1. 5

    I’m used to thinking of the Go ecosystem as pretty limited, so this is an interesting look at a few places where the line might run the other way.

    1. 3

      This surprises me: I tend to think of the Go language as limited, but things like library support, tooling, and the stdlib are either “just fine” or “pretty good”. What areas do you see as weak?

      1. 3

        An example from work the other day: Go has nothing in stdlib to help check if one map is a submap of another. Had to write it ourselves.

        1. 1

          An example from work the other day: Go has nothing in stdlib to help check if one map is a submap of another. Had to write it ourselves.

          Ah. Heh, I guess that’s where “Go the limited language” bleeds into “Go the stdlib” and that area does get painful.

          Off to implement Len(), Less(i,j) and Swap(i,j) again…

      2. 1

        Depends on which languages you are comparing it to. Most languages that people compare to Go or Rust are many times older, so they are of course going to have a much more mature ecosystem.

        For its age, I think Go’s ecosystem is very impressive. No doubt this is largely due to it being carried by Google’s name, but a success with an advantage is still a success.

        1. 1

          In this context, I meant as compared to Rust :)

      1. 4

        I did this once, but with sam

        It didn’t really stick, but structural regular expressions almost kept me around.

        1. 11

          You might have heard of kakoune and how it’s a bit like vim, except that verb and object are reversed. One interesting and rarely-mentioned consequence is that you can do manipulation that is very close to what is described in the structural regexp document.

          Consider this hypothetical structural regex y/".*"/ y/’.*’/ x/[a-zA-Z0-9]+/ g/n/ v/../ c/num/ that they describe. Put simply, it is supposed to change each variable named n into num while being careful not to do so in strings (in \n for example).

          This exact processing can be done interactively in kakoune with the following key sequence: S".*"<ret>S'.*'<ret>s[a-zA-Z0-9]+<ret><a-k>n<ret><a-K>..<ret>cnum<esc>

          It looks cryptic, but it’s a sequence of the following operations:

          • S".*"<ret> : split the current selection into multiple, such that the patterns ".*" are not selected anymore
          • S'.*'<ret> : same, but with '.*'
          • s[a-zA-Z0-9]+<ret> : select all alphanumeric words from the current selections
          • <a-k>n<ret> : keep only the selections that contain an n
          • <a-K>..<ret> : exclude all selections that contain 2 or more characters
          • cnum<esc> : replace each selection with num

          And the nice thing about being interactive is that you don’t have to think up the entire command at once. You can simply do it progressively and see that your selections are narrowing down to exactly what you want to change.

          1. 7

            There’s also vis, which explicitly sets out to support structural regular expressions.

            1. 2

              I can vouch for vis, I use it. Great editor!

            2. 2

              kakoune is definitely the best replacement for vim I’ve seen so far. But I enjoy the advantages of IDEs too much to switch (things like autocomplete, goto definition, catching errors while I type, etc.)

              Hopefully the Dance extension to vscode will become stable over time, and I can give it another shot.

              1. 8

                Kakoune has a pretty slick LSP plugin if the languages you work with have language servers available.

                1. 1

                  FWIW I just spent a whole hour trying to install it on Linux and failed. And let’s say I’m not exactly a noob.

                  By contrast, it took me only 10 minutes to get it to work on neovim, and I’m equally unfamiliar with its ecosystem.

                2. 1

                  Everything you described is available in Neovim’s built-in LSP client or in one of the many LSP plugins for Vim.

                  1. 1

                    I’m not sure I get it. Neovim has a kakoune mode?

                    1. 1

                      I was referring to the “advantages of IDEs” (autocomplete, goto, diagnostics, renaming, documentation, etc.)

                3. 1

                  I wish for something that has the object/verb ordering that kakoune does, but with the same plugins that vim does. I tried kakoune for a week and loved using it, but my workflow was broken and I found it was taking me a lot longer to get productive again as I tried finding some replacements for plugins I depend on in vim (and either not finding anything, or often finding a broken/incomplete alternative).

                  1. 1

                    Really curious now what vim plugins you use? I’ve never tried any plugins, so not sure what they can add

                    1. 2

                      Here’s a short list of the plugins I use the most, loosely in order of how much I depend on them:

                      • deoplete (better [IMHO] autocomplete)

                      • ale (linting)

                      • fzf.vim (lots of integration with fzf)

                      • vimwiki (formatted note taking/organization)

                      • gitgutter (indicate changed/new/deleted lines not staged/committed)

                      • colorizer (changes hex text color codes to match the color they represent)

                      • file-beagle (nice file browsing within vim)

                      • indentline (I write a lot of python, visual indication of indention helps during long sessions)

                      I don’t recall which plugins I wasn’t able to find replacements for in kak. I should give it another go soon, since I really do like kak’s philosophy with object/verb.

                      1. 1

                        Interesting. Quite a few of those seem almost like adding IDE features to vim. Curious on your take on vim+plugins vs IDE with vi mode?

                        1. 2

                          vim + plugins all the way, since I can run that setup basically anywhere (e.g. headless remote systems, etc). I also conditionally enable some plugins based on the file type I am editing.

                4. 4

                  If anyone is interested in an updated version of sam, here ya go. Sadly I haven’t been able to give it the time it deserves lately, but it works well enough.

                1. 4

                  There are tons of Winamp skins but really nice looking ones are VERY RARE.

                  1. 3

                    Lots of them aren’t great. But what you’re seeing there is computing made accessible to all, and I think that’s admirable.

                    1. 2

                      “Subtlety” was not a thing in early consumer computing.

                    1. 11

                      I used to use a lot of util alternatives, but now I just stick to the standard ones: find, make, man, vim, grep, xargs, watch, etc.

                      You still have to learn them anyway because they’re installed everywhere by default. You can’t always install random tools on a system.

                      1. 14

                        Not sure I buy this argument.

                        Becoming facile with the standard, available-everywhere tools is a good investment, to be sure, but some of the tools cited offer a huge value-add over the standard tools and can do things they can’t.

                        1. 2

                          In my own usage, I haven’t really noticed too much of a difference. The things I have noticed are that the non-standard tools are sometimes faster and they tend to use more common --flags. entr is a bit unique, but I’ve also been able to replace entr with watch to achieve basically the same result.

                          What are some of the huge value adds that I’m missing out on?

                          1. 5

                            Utilities like ripgrep and fd natively respecting your .gitignore is a huge feature.

                            1. 5

                              Oh, man. That’s actually the feature that made me leave ripgrep. I personally prefer to not hide a bunch of information by default. Let me filter it. I didn’t even realize it did that at first. I’ve spent hours debugging something only to realize that ripgrep was lying to me. Definitely user error on my part and I eventually found you can disable that behavior. But that made me realize grep suited my needs just fine.

                              1. 3

                                Yep, same here. I think it’s a horrible default and easily the biggest problem I’ve ever encountered with ripgrep.

                                1. 5

                                  I find that to be an OK default, and for the odd time you don’t want that behaviour there’s --no-ignore-vcs, which you can also set in your ripgrep config file.

                                  1. 1

                                    FWIW, the automatic filtering by default is one of the most highly praised features of ripgrep that I’ve heard about. (And that in turn is consistent with similar praise for tools like ack and ag.)

                                    If you don’t want any filtering at all, ripgrep will behave like grep with rg -uuu <pattern>.

                                    1. 1

                                      I absolutely believe that, and I didn’t like it with ack and ag (yeah, I also progressed through all of them).

                                      I’m not saying you didn’t do the right thing if most people like it, but it’s not for me. Thanks for ripgrep and please don’t take the above comment as any sort of attack. (and yes, I’m using the form you mentioned).

                                      1. 1

                                        I didn’t, no problem. Just wanted to defend my choice. :)

                                  2. 1

                                    Hi, ripgrep author here. Out of curiosity, what made you try ripgrep in the first place? From what I hear, folks usually try it for one or both of performance and its automatic filtering capabilities. It seems like you didn’t like its automatic filtering (and I’ve tried to put that info within the first couple sentences of all doc materials), so that makes me curious what led you to try it in the first place. If you were only searching small corpora, perhaps the performance difference was not noticeable?

                                    1. 1

                                      In my case it was speed, in particular, replacing vimgrep

                                2. 3

                                  watch(1) as far as I know doesn’t do anything with inotify(7), so it’s limited to a simple interval instead of being event based. As others have pointed out, you could use inotifywait(1) and some relatively simple scripts to obtain similar results.

                                  That being said, I still use find, grep and others regularly, these are just a little easier to work with on the daily when they’re available. fd in particular has nicer query syntax for the common cases I have.

                                  1. 3

                                    Yeah, watch doesn’t watch (ha) for file changes, but my goal is to avoid having to rerun commands manually while I work on a file. watch(1) can achieve this goal. Yes, it’s inefficient to rerun the build every -n .1 seconds instead of waiting for file changes, but I end up reaching my goal either way, even if I have to wait 1 second for the build to start.

                                    Although, I can definitely see how this could be painful if the command is extremely slow.

                                    1. 5

                                      I work with Python… so it goes without saying that my tests are already slow, but also many of them are computationally expensive (numpy), so they can take several minutes to complete. Obviously that’s not the same situation for everyone, use what works for you! The whole point of my post is to share tools that others may not be aware of but may find helpful.

                                  2. 1

                                    What are some of the huge value adds that I’m missing out on?

                                    Thinking about this I may have over-promised, but one thing that occurs is vastly reduced cognitive load as well as a lot less straight up typing.

                              For example, I personally find it much nicer to simply type ag "foo" as opposed to find . -name "*" -exec grep "foo" {} \; -print

                                3. 5

                                  I’m a developer, so it’s super worth it for me to optimize my main workflow. I know the standard tools well enough that I can pretty easily revert back to them if there is one, but some of these don’t have an alternative in the standard *nix toolset. entr(1) in particular has nothing equivalent as far as I know.

                                  1. 3

                                    How about inotifywait?

                                    1. 2

                                      entr is built on the same OS API, inotify(7). You could probably get inotifywait to act in a similar manner with some scripting, but it does not do the same thing.

                                      1. 2

                                        That’s all I’ve ever used inotifywait for: watch some files or directories, and run scripts when change events occur. But I didn’t even know about entr. It does look relatively streamlined.

                                        1. 1

                                          Your script(s) have to implement the behavior that entr(1) implements though, so while it’s technically possible it’s far from a ‘single command’ experience, which I find very nice.

                                  2. 1

                                    If you have to work on systems with standard tooling a lot and no option to install stuff, fair. But for me, and I think a lot of other devs, it’s pretty feasible to install non-standard tools. I almost never work on another machine than my own. Most people customize their personal machine anyway.

                                    One rule I have is that scripts are always written in plain POSIX compliant shell for portability with the standard tools.

                                    1. 1

                                      I almost never work on another machine than my own.

                                      I wonder if this is a side effect of the trends towards infrastructure as code, “cattle not pets”, containerization, etc etc etc.

                                      In general I’ve found the same: the most work I do on a remote box is connecting to a running container and doing some smoke testing.

                                      1. 3

                                        I feel like in the era where shared UNIX hosts were more common, I would just cart binaries around in my home directory – which was often an NFS directory shared amongst many hosts as well.

                                  1. 5

                                    In case anyone else saw the “semi-group with an undecidable problem” and immediately tried to figure out what it is and why it is undecidable: it is apparently known as the “Tzeitin semigroup” or “the Tzeitin semigroup S7” in the literature searches I’ve done. I’ve yet to find the original proof, but it’s ~20 pages of Russian so I’d likely struggle to make much of it if I did find it.

                                    Either way, this seems like a fairly well known example of a semigroup for which the word problem is undecidable.

                                    1. 4

                                      For those interested, a proof is in Theorem 2.2 of Adian, Durnev, Decision problems for groups and semigroups, Russian Mathematical Surveys, Volume 55 (2002), Number 2, 207–296.

                                      I was also unable to find an online copy of Tseitin’s 1956 original, it looks like the archive of Doklady Akademii Nauk is only available starting from 1957.

                                    1. 7

                                      Promotions/raises, in my opinion, are not correlated with career growth. They’re more of an indication of the risk the company thinks it runs if you were to switch jobs.

                                      I think the short answer to your question is that we stop attempting to measure or expect growth in a certain direction after graduation. There are simply too many different ways to grow.

                                      If you feel strongly about code reviews or CI/CD, getting your coworkers and management on board with this certainly qualifies as career growth. But if you’re more interested in solving complex problems or refactoring a core component, that’s not necessarily wrong either.

                                      1. 2

                                        Promotions/raises, in my opinion, are not correlated with career growth. They’re more of an indication of the risk the company thinks it runs if you were to switch jobs.

                                        Oh yeah. I’m thinking back to an extremely well compensated coworker who was essentially the “last man standing” to support an old mainframe application responsible for nine digits of annual revenue. Others knew the system but he was the last of the original developers and when there were Very Bad Days he was the final escalation point.

                                        This arrangement worked well for him for many years until management decided that a team of Infosys contractors was as cheap as he was, offered 24x7 support instead of taking three months of paid vacation, and produced excellent documentation because they had to have something to hand to new team members.

                                        Optimizing for salary and title leads to local maxima. IMO: you should always find an interesting job and interview at least once a year even if you like your current job. You’ll quickly learn whether you are paid well because your skills and abilities are in demand, or because you’re a well-paid last man standing.

                                        1. 1

                                          Exactly. I’ve received raises in situations where (unintentionally, and quite suddenly) I was the last man standing. On the other hand some of the periods where I’ve done some of my best work have gone unnoticed and (at least financially) unrewarded. If you look at your paycheck to see if you’ve accomplished something, prepare to be disappointed.

                                          With respect to the job security / remuneration trade-off described above, I’ve thought that being an open source developer on a widely-used component might represent the best of both worlds: on the one hand you get to build expertise distinct from commodity developer skills (i.e. when you have to sell yourself as a C++/web/SAP developer), and on the other hand you’re not tied to just one specific employer.

                                      1. 6

                                        It is simple (and cheap) to run your own mail server, they even sell them pre baked these days as the author wrote.

                                        What is hard and requires time is server administration (security, backups, availability, …) and $vendor black-holing your emails because it’s Friday… That’s not so hard that I’d let someone else read my emails, but YMMV. :)

                                        1. 8

                                          not so hard that I’d let someone else read my emails

                                            Only if your correspondents also host their own mail. Realistically, nearly all of them use gmail, so G gets to read all your email.

                                          1. 4

                                            I have remarkably few contacts on GMail, so G does not get to read all my email, but you’re going to say that I’m a drop in the ocean. So be it.

                                            1. 4

                                              you’re going to say that I’m a drop in the ocean. So be it.

                                              I don’t know what gave you that impression. I also host my own email. Most of my contacts use gmail. Some don’t. I just don’t think you can assume that anyone isn’t reading your email unless you use pgp or similar.

                                              1. 1

                                                Hopefully Autocrypt adoption will help.

                                                1. 2

                                                  This is the first time I’m hearing of Autocrypt. It looks like just a wrapper around PGP encrypted email?

                                                  1. 1

                                                    It’s a practice described by a standard that aims to help spread the use of PGP by passing keys around automatically.

                                                    What if every cleartext email you received already had a public PGP key attached to it, and everyone’s mail client had its own key and did the same, attaching the key to every new cleartext mail it sent?

                                                    Then you could reply to anyone with a PGP-encrypted message, and write new encrypted messages to everyone. That would give you a baseline where every communication is encrypted, with a weaker trust model than exchanging keys by whispering every byte of the public key in base64 into someone’s ear alone in Alaska — but as a first step, it would bring many more people to PGP.

                                                    I think that is the spirit; more info at https://autocrypt.org/ and https://www.invidio.us/watch?v=Jvznib8XJZ8

                                                    1. 2

                                                      Unless I misunderstand, this still doesn’t encrypt subject lines or recipient addresses.

                                                      1. 1

                                                        Like you said. There is an ongoing discussion for fixing it for all PGP at once, including Autocrypt as a side effect, but this is a different concern.

                                            2. 1

                                              Google gets to read those emails, but doesn’t get to read things like password reset emails or account reminders. Google therefore doesn’t know which email addresses I’ve used to give to different services.

                                            3. 4

                                              Maybe I’m just out of practice, but last time I set up email (last year, postfix and dovecot) the “$vendor black-holing your emails” problem was the whole problem. There were some hard-to-diagnose problems with DKIM, SPF, and other “it’s not your email, it’s your DNS” issues that I could only resolve by sending emails and seeing if they got delivered, and even with those resolved emails that got delivered would often end up in spam folders because people black-holed my TLD, which I couldn’t do anything about. As far as I’m concerned, email has been effectively embraced, extended, and extinguished by the big providers.

                                              1. 4

                                                This was my experience when I set up and ran my own email server: everything worked perfectly end to end, success reports at each step … until it came time to the core requirement of “seeing my email in someone’s inbox”. Spam folder. 100% of the time. Sometimes I could convince gmail to allow me by getting in their contact/favorite list, sometimes not.

                                                1. 1

                                                  I wonder how much this is a domain reputation problem. I’ve hosted my own email for well over a decade and not encountered this at all, but the domain that I use predates gmail and has been sending non-spam email for all that time. Hopefully Google and friends are already trained that it’s a reputable one. I’ve registered a different domain for my mother to use more recently (8 or so years ago) and that she emails a lot of far less technical people than most of my email contacts and has also not reported a problem, but maybe the reputation is shared between the IP and the domain. I do have DKIM set up but I did that fairly recently.

                                                  It also probably matters that I’ve received email from gmail, yahoo, hotmail, and so on before I’ve sent any. If a new domain appears and sends an email to a mail server, that’s suspicious. If a new domain appears and replies to emails, that’s less suspicious.

                                                  1. 2

                                                    Very possible. In my case I’d migrated a domain from a multi-year G-Suite deployment to a self-hosted solution with a clean IP per DNSBLs, SenderScore, Talos, and a handful of others I’ve forgotten about. Heck, I even tried to set up the DNS pieces a month in advance – PTR/MX, add to SPF, etc. – in the off chance some age penalty was happening.

                                                    I’m sure it’s doable, because people absolutely do it. But at the end of the day the people I cared about emailing got their email through a spiteful oracle that told me everything worked properly while shredding my message. It just wasn’t worth the battle.

                                              2. 3

                                                That’s not so hard that I’d let someone else read my emails

                                                Other than your ISP and anyone they peer with?

                                                1. 2

                                                  I have no idea how bad this is to be honest, but s2s communications between/with major email providers are encrypted these days, right? Yet, if we can’t trust the channel, we can decide to encrypt our communication too, but that’s leading to other issues unrelated to self-hosting.

                                                  Self-hosting stories with titles like “NSA-proof your emails” are probably a little oversold 😏, but I like to think that [not being a US citizen] I gain some privacy by hosting those things in the EU. At least, I’m not feeding the giant ad machine, and just that feels nice.

                                                  1. 7

                                                    I’m a big ‘self-hosting zealot’ so it pains me to say this…

                                                    But S2S encryption on mail is opportunistic and unverified.

                                                    What I mean by that is: even if you configure your MTA to use TLS and prefer it, it really needs to be able to fall back to plaintext, given the sheer volume of providers whose MTAs are not configured to do encryption and so can neither receive nor send encrypted mail.

                                                    It is also true that no MTA I know of will actually verify the TLS CN field or verify a CA chain of a remote server..

                                                    So, the parent is right, it’s trivially easy to MITM email.

                                                    1. 3

                                                      So, the parent is right, it’s trivially easy to MITM email.

                                                      That is true, but opportunistic and unverified encryption does defeat a passive global adversary or a passive MITM. These days an attacker has to become active in order to read mail, which is harder to do on a massive scale without leaving traces than staying passive. I think there is some value in this post-Snowden situation.

                                                      1. 1

                                                        What I’ve done in the past is force TLS on all the major providers. That way lots of my email can’t be downgraded, even if the long tail can be. MTA-STS is a thing now though, so hopefully deploying that can help too. (I haven’t actually done that yet so I don’t actually know how hard it is. I know the Postfix author said implementation would be hard though.)

                                                  2. 1

                                                    I get maybe 3-4 important emails a year (ignoring work). The rest is marketing garbage, shipping updates, or other fluff. So while I like the idea of self hosting email, I have exactly zero reason to. Until it’s as simple as signing up for gmail, as cheap as $0, and requires zero server administration time to assure world class deliverability, I will continue to use gmail. And that’s perfectly fine.

                                                    1. 7

                                                      Yeah, I don’t want self-hosted email to be the hill I die on. The stress/time/energy of maintaining a server can be directed towards more important things, IMO

                                                  1. 5

                                                    I have read for years about how evil email enumeration is… but guess what? I think the benefit of being able to tell a user that they are using the wrong username rather than the wrong password outweighs any theoretical danger of revealing that a certain email is in use. Change my mind.

                                                    1. 10

                                                      I’ll take a stab at trying to change your mind. For some context, I’m a Penetration Tester by trade, and this specific topic is, in my opinion, a great example of subtle risks with huge real-world impacts.

                                                      The issue of username/email enumeration has two attack patterns:

                                                      • Password spraying - Guessing a weak password across tons of accounts; like brute-forcing, but trying to find the email with the weak password rather than the weak password for the email.
                                                      • Password “stuffing” - Taking a known compromised credential pair and trying to authenticate with it to tons of other services where it was re-used.

                                                      For password spraying, there is only one thing I actually need: a username/email. In the real world I go from an External Network Penetration Test to internal network access ~80% of the time because of username enumeration and some strategically guessed passwords. Having the ability to get a list of known usernames to target greatly reduces the amount of guesses I have to make and ramps my accuracy up a ton.

                                                      For a full example, say I am targeting your corporate mail server, based on Exchange or O365, to try and guess credentials that I can then re-use on the target VPN infrastructure. My very first step is to grab a list of known emails/usernames from previous password dumps, public information, or directories. Then I generate a list of potential name combinations from location-specific birth information by year. Next comes the actual username enumeration, where I try to identify the “valid” accounts (aka what you are asking about). In my example, Microsoft agrees with you and doesn’t believe that username/email enumeration is a risk… which is why I wrote a ton of tooling to automatically use NTLM/HTTP timing-based responses to enumerate the valid users. Now, armed with a list of guaranteed usernames/emails, I just start working down the list of the season’s hottest passwords over the next few days: Summer2020!, Password1!, Companyname2020!. All I really need is one credential. It’s not about the single user, it’s about the bulk knowledge. If I were going in blind without the confirmed accounts, I would be generating tons and tons more traffic and would be far easier to flag; having enumeration puts the statistics of getting automated guesses way more on the attacker’s side.

                                                      The other example is password stuffing. This is more straightforward: given that I have a compromised username/email and password for a user, I can take a bot that knows how to authenticate to tons of different services (banks, social media, blah blah blah) and try those combinations. If username enumeration exists on these services, it actually allows me to check whether accounts are valid for the service before actually submitting my automated logins. If I am a bot herder, my job is to stay undetected for as long as possible, and the enumeration greatly assists in that.

                                                      Hopefully that helps! It’s one of those strange things where people forget about the collective risk and focus more on the singular threat models; attackers rarely care about the individual IRL.

                                                      1. 4

                                                        This is great advice. And it really reinforces for me why appsec people should be way more involved in the software development process as early as possible.

                                                        At a previous job we were identified by nine-digit numeric IDs (no, not those nine digits!). I built a public-facing API for internal use that returned public-facing data created by employees. No problem, thinks me. But I left the SSO ID on the API because why not? Ship it!

                                                        A few days later one of the blue team guys sends me an email with 2/3rds of my database, exfiltrated by walking the API with a dictionary file, and explains what you just explained above. Oops.

                                                        1. 2

                                                          Not a pen-tester, but I would’ve assumed allowing Password1! as a valid password is a bigger issue than email enumeration. You can now check against lists of bad passwords from dumps.

                                                          1. 2

                                                            You’d think, right? But you are fighting human nature and historical IT practices. As it turns out, making a comprehensive deny list is extremely difficult, and once hashing is in play, the only time a credential gets checked against it is at the filter level when that credential is changed. You can’t just look up your passwords in your ntds.dit and compare them with historical dumps (I try to do that for my clients, because the reality is the offensive tools are actually better at it than the defensive ones). As for the historical reasons: oftentimes IT resets credentials to a weak or organizationally default credential that never gets changed, and support desk staff often don’t remember to check the “change after first login” checkbox.

                                                            Like I said, it only takes one. Also, password patterns follow human nature in more ways than one; I’ve been popping my American clients that have comprehensive blocklists left and right with Trump2020!. Passwords suck, haha.

                                                            EDIT: To add another thing: think about Password1!. Lots of orgs have an 8-character password requirement with special and numerical characters, so technically it fits lots of places. If there is organizational SSO and the filters are not enforced everywhere, it can also propagate to other authentication areas.

                                                            1. 2

                                                              To add another thing: think about Password1!. Lots of orgs have an 8-character password requirement with special and numerical characters.

                                                              Even better is to have entropy requirements, including dictionary files. zxcvbn is a good example of a frontend library for this.

                                                              You can also compare hashes with the HIBP Pwned Passwords dataset and reject new passwords that match.
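                                                              The Pwned Passwords check works via k-anonymity: hash the candidate password with SHA-1, send only the first five hex characters to the range API (`https://api.pwnedpasswords.com/range/<prefix>`), and compare the remaining 35-character suffix locally against the returned `SUFFIX:COUNT` lines. A minimal sketch of the local side (no network call is made here; the response format follows the documented API):

```python
import hashlib

def hibp_prefix_suffix(password):
    """Split the SHA-1 of a password into the 5-char prefix sent to the
    range API and the 35-char suffix that never leaves the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix, range_response):
    """Scan a range-API response body (lines of 'SUFFIX:COUNT') for our
    suffix; returns 0 if the password was not found in any breach."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

                                                              Because only the 5-character prefix is transmitted, the service never learns which password (or even which full hash) you were checking.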

                                                              1. 1

                                                                Are there other databases than HIBP that are commonly used for this?

                                                                1. 2

                                                                  I don’t know. Pwned Passwords has 573 million SHA1 hashes, so I’ve not felt the need to look further.

                                                          2. 1

                                                            This is great advice. Thank you for writing such a comprehensive answer.

                                                          3. 1

                                                            Aside from the technical side explored by other replies, depending on your location and/or the location of your users, you could face legal consequences. Under legislation such as the GDPR, an email address is considered personally identifying information. If someone realises that you are leaking such personal information and reports you, you could face a fine. In some cases, the user may also claim compensation from you. If the user suffers a loss due to your failure to safeguard their data, then it could be a large amount of money. (e.g. Imagine you run a site which is legal, but not considered socially acceptable. A public figure signs up using their email address. Someone uses email enumeration to discover that said public figure has an account on your site, causing damage to their reputation and consequent loss of earnings.)

                                                          1. 24

                                                            That headline is pretty confusing. It seems more likely twitter itself was compromised, than tons of individual users (billionaires, ex-leaders, etc)?

                                                            1. 18

                                                              You’re right. This is a case of The Verge reporting what they were seeing, but the scope has grown greatly since the initial posts. There have since been similar posts from several dozen prominent accounts, and Gemini replied that it has 2FA.

                                                              Given the scope, this likely isn’t accounts being hacked. I suspect that either the platform or an elevated-rights Twitter content admin has been compromised.

                                                              1. 12

                                                                Twitter released a new API today (or was about to release it? Not entirely clear to me what the exact timeline is here), my money is on that being related.

                                                                A ~$110k scam is a comparatively mild result considering the potential for such an attack, assuming there isn’t some 4D chess game going on as some are suggesting on HN (personally, I doubt there is). I don’t think it would be an exaggeration to say that in the hands of the wrong people, this could have the potential to tip election results or even get people killed (e.g. by encouraging the “Boogaloo” people and/or exploiting the unrest relating to racial tensions in the US from some strategic accounts or whatnot).

                                                                As an aside, I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said.

                                                                1. 13

                                                                  or even get people killed

                                                                  If the Donald Trump account had tweeted that an attack on China was imminent there could’ve been nuclear war.

                                                                  Sounds far-fetched, but this very nearly happened with Russia during the cold war when Reagan joked “My fellow Americans, I’m pleased to tell you today that I’ve signed legislation that will outlaw Russia forever. We begin bombing in five minutes.” into a microphone he didn’t realize was live.

                                                                  1. 10

                                                                    Wikipedia article about the incident: https://en.wikipedia.org/wiki/We_begin_bombing_in_five_minutes

                                                                    I don’t think things would have escalated to a nuclear war that quickly; there are some tensions between the US and China right now, but they don’t run that high, and a nuclear war is very much not in China’s (or anyone’s) interest. I wouldn’t care to run an experiment on this though 😬

                                                                    Even in the Reagan incident things didn’t seem to have escalated quite that badly (at least, in my reading of that Wikipedia article).

                                                                    1. 3

                                                                      Haha. Great tidbit of history here. Reminded me of this 80’s gem.

                                                                      1. 2

                                                                        You’re right - it would probably have gone nowhere.

                                                                    2. 6

                                                                      I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said

                                                                      It’d be nice to think so.

                                                                      It would be somewhat humorous if an attack on the internet’s drive-by insult site led to such a thing, rather than the last two decades of phishing attacks targeting financial institutions and the like.

                                                                      1. 3

                                                                        I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said.

                                                                        A built-in system in the browser could create a 2FA system while being transparent to the users.

                                                                        1. 5

                                                                          2fa wouldn’t help here - the tweets were posted via user impersonation functionality, not direct account attacks.

                                                                          1. 0

                                                                            If you get access to twitter, or the twitter account, you still won’t have access to the person’s private key, so your tweet is not signed.

                                                                            1. 9

                                                                              Right, which is the basic concept of signed messages… and unrelated to 2 Factor Authentication.

                                                                              1. 2

                                                                                2FA, as I used it, means authenticating the message, via two factors, the first being access to twitter account, and the second, via cryptographically signing the message.

                                                                                1. 3

                                                                                  Twitter won’t even implement the editing of published tweets. Assuming they’d add something that implicitly calls their competence in stewarding people’s tweets into question is a big ask.

                                                                                  1. 2

                                                                                    I’m not asking.

                                                                        2. 2

                                                                          A ~$110k scam

                                                                          The attacker could just be sending coins to himself. I really doubt that anyone falls for a scam where someone you don’t know says “give me some cash and I’ll give you double back”.

                                                                          1. 15

                                                                            I admire the confidence you have in your fellow human beings but I am somewhat surprised the scam only made so little money.

                                                                            I mean, there’s talk about Twitter insiders being paid for this so I would not be surprised if the scammers actually lost money on this.

                                                                            1. 10

                                                                              Unfortunately people do. I’m pretty sure I must have mentioned this before a few months ago, but a few years ago a scammer managed to convince a notary to transfer almost €900k from his escrow account by impersonating the Dutch prime minister with a @gmail.com address and some outlandish story about secret agents, code-breaking savants, and national security (there’s no good write-up of the entire story in English AFAIK, I’ve been meaning to do one for ages).

                                                                              Why do you think people still try to send “I am a prince in Nigeria” scam emails? If you check your spam folder you’ll see that’s literally what they’re still sending (also many other backstories, but I got 2 literal Nigerian ones: one from yesterday and one from the day before that). People fall for it, even though the “Nigerian Prince” is almost synonymous with “scam”.

                                                                              Also, the 30 minute/1 hour time pressure is a good trick to make sure people don’t think too carefully and have to make a snap judgement.

                                                                              As a side-note, Elon Musk doing this is almost believable. My friend sent me just an image overnight, and when I woke up to it this morning I was genuinely wondering whether it was true or not. Jeff Bezos? Well….

                                                                              1. 12

                                                                                People fall for it, even though the “Nigerian Prince” is almost synonymous with “scam”.

                                                                                I’ve posted this research before but it’s too good to not post again.

                                                                                Advance-fee scams are high-touch operations. You typically talk with your victims over phone and email to build up trust as your monetary demands escalate. So anyone who realizes it’s a scam before they send money is a financial loss for the scammer. But the initial email is free.

                                                                                So instead of more logical claims, like “I’m an inside trader who has a small sum of money to launder” you go with a stupidly bold claim that anyone with a tiny bit of common sense, experience, or even the ability to google would reject: foreign prince, huge sums of money, laughable claims. Because you are selecting for the most gullible people with the least amount of work.

                                                                          2. 5

                                                                            My understanding is that Twitter has a tool to tweet as any user, and that tool was compromised.

                                                                            Why this tool exists, I have no idea. I can’t think of any circumstance where an employee should have access to such a tool.

                                                                            Twitter has been very tight-lipped about this incident and that’s not a good look for them. (I could go on for paragraphs about all of the fscked up things they’ve done)

                                                                            1. 5

                                                                              or an elevated-rights Twitter content admin

                                                                              I don’t think content admins should be able to make posts on other people’s account. They should only be able to delete or hide stuff. There’s no reason they should be able to post for others, and the potential for abuse is far too high for no gain.

                                                                              1. 6

                                                                                Apparently some privileges allow internal Twitter employees to remove MFA and reset passwords. Not sure how it played out, but I assume MFA had to be disabled in some way.

                                                                              1. 5

                                                                                That’s a good article! Vice has updated that headline since you posted to report that the listed accounts got hijacked, which is more accurate. Hacking an individual implies that the breach was in their control: phone, email, etc. This is a Twitter operations failure which resulted in existing accounts being given to another party.

                                                                            1. 3

                                                                              1311 units, 63 prefixes?! My Mac only comes with 586 units, 56 prefixes and millilightseconds isn’t among them. How do I get more?

                                                                              1. 5

                                                                                Trey Harris (in response to a similar question at the time) explained: “I used to populate my units.dat file with tons of extra prefixes and units.” In any case, lightseconds (and therefore millilightseconds) is in the standard definitions.units file on Linux these days, so perhaps you could grab a better definitions.units out of https://www.gnu.org/software/units/ if nothing else. (On my machine, the standard units starts up with 2919 units, 109 prefixes, and 88 nonlinear units.)
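                                                                                The conversion behind millilightseconds is simple enough to sanity-check by hand, which is all the units database does under the hood. A quick sketch (c is exact by definition):

```python
# Speed of light in m/s, exact by definition of the metre.
C = 299_792_458

def millilightseconds_to_meters(mls):
    """One lightsecond is C metres, so a millilightsecond is C/1000 metres."""
    return mls * C / 1000

# One millilightsecond works out to just under 300 km.
```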

                                                                                1. 4

                                                                                  brew install gnu-units && gunits

                                                                                  1. 4

                                                                                    Yesssssssssssssssss

                                                                                    3460 units, 109 prefixes, 109 nonlinear units
                                                                                    

                                                                                    Thanks Owen!

                                                                                1. 2

                                                                                  Timely article - currently looking at a backup solution for my NAS.

                                                                                  I’m a bit bummed about Backblaze, though. My reading is that their $60 annual plan is for Windows/Mac only; I could cobble together something that targets their B2 service but “cobble together Frankenstein’s monster” is exactly what scares me.

                                                                                  Anyone know of a similar reliable flat-fee plan targeting Linux that lets me not care about backups? Or am I destined to be building shell scripts to send stuff to Glacier for the rest of my days…

                                                                                    1. 2

                                                                                      [This is my personal opinion and does not reflect the opinions of my employers yada yada yada :)]

                                                                                      Backblaze is pretty great. In addition to supporting Linux as a first-class citizen, they’re also very permissive around what’s considered a ‘computer’, so in my case I’m paying $60/year to back up the entirety of my Synology NAS (currently running at around 3TB storage used).

                                                                                      Really appreciate those guys!

                                                                                    2. 2

                                                                                      I haven’t tried, but it looks like rclone supports backblaze.

                                                                                      1. 1

                                                                                        Respectfully, your reading is incorrect. My Synology NAS is currently backing itself up to Backblaze, and as others in this thread have mentioned they provide Linux client support.

                                                                                        1. 1

                                                                                          For their $60 unlimited plan? Or B2?

                                                                                          If the latter I’d have to pass and my questions stand around flat-priced plans with Linux support.

                                                                                          If the former, I’m going to have to figure out how I’ve misread all their documentation so badly ;)

                                                                                          1. 2

                                                                                            You’re right it’s B2. However I’m storing ~3TB for $10/mo which is still 50% less than I was paying with that other provider you mentioned :)

                                                                                            [Dancing carefully since this is my employer. Opinions are my own etc etc ;)]

                                                                                            1. 1

                                                                                              I think I’m picking up what you’re putting down, thanks ;) I’ll go run some more numbers!

                                                                                        2. 1

                                                                                          I use duply, which is a front-end to duplicity that simplifies its most common operations[*]. The backend I use is Backblaze B2, and it’s pretty seamless and not Frankenstein monster-esque (at least by my standards).

                                                                                          The fiddling for the B2 part is limited to creating a bucket and some auth credentials and putting the right string in the duply profile file.

                                                                                          $60/year buys you 1TB with B2 (but you would pay more to download it all in event of data loss).

                                                                                          I haven’t done a full recovery yet (touch wood) but I’ve recovered individual files a few times without any hassle.

                                                                                          I also follow a similar scheme to the article, so my B2 backup is only there as a fallback if my local (NAS) backups fail for some reason.

                                                                                          [*] I did use duplicity, but found duply handles my use case fine without the custom scripting I used to do.

                                                                                        1. 12

                                                                                          I’ve definitely reconsidered my commenting since getting the warning. I used to reply to people who I felt were wrong instead of flagging, but since doing that just attracts flags that add to my quota, I’ve started flagging instead.

                                                                                          1. 3

                                                                                            Without knowing the character of your replies (which is to say, if the replies in question are nasty and toxic, I would not feel this way) it seems at first blush that the change this inspired in you is not a constructive one.

                                                                                            1. 3

                                                                                              In the cases where I was flagged heavily my comments were neutral in tone.

                                                                                              1. 17

                                                                                                Flagging is not always done in good faith. People may flag your otherwise neutral comment if they get personally offended. And mods explicitly won’t listen to your feedback.

                                                                                                IMO this whole “Reconsider your behavior or take a break” injunction is pretty stupid and condescending.

                                                                                                1. 6

                                                                                                  Flagging as implemented on this site seems way too prone to abuse. The open visibility of flags encourages pile-ons, and the automated warning is confusing and raises umbrage. Flagging users who are trolling or spamming should be a signal to the moderators, not to the rest of the community.

                                                                                                  1. 1

                                                                                                    I think the time and point threshold to show scores is a great thing here but could perhaps be tweaked to help. Maybe make it a bit higher/lower before it shows the scores, or make comments age a bit longer before showing a score?

                                                                                                    The overall moderation model here seems solid enough. But if we wanted to get real wild Slashdot’s moderation/meta-moderation system could be an interesting fit ;)

                                                                                                    1. 1

                                                                                                      Trouble with the time delay is that highly engaged (ie controversial) topics stay on the front page for a lot longer than most posts.

                                                                                          1. 9

                                                                                            If you’re thinking of learning Perl or starting a new project in Perl, you might want to reconsider.

                                                                                            I don’t see anything controversial with this statement. You can apply it to any number of languages, not just Perl - Common Lisp, Pascal, COBOL. And I say this as a Perl aficionado.

                                                                                            Most languages never make it past compiling themselves. Perl has had a great run and is still super-fun and useful for those that know it.

                                                                                            1. 6

                                                                                              Hey, I start new projects in Common Lisp! It’s still a great choice.

                                                                                              1. 3

                                                                                                And I start personal projects in Perl… it doesn’t mean that corporations with multi-year maintenance horizons do, though.

                                                                                                1. 2

                                                                                                  AFAIK Google uses Lisp and has contributed to SBCL. And Fastmail uses Perl.

                                                                                                  1. 2

                                                                                                    If Google uses lisp it’s for something incredibly specific and obscure. You absolutely could not start a new lisp project at Google.

                                                                                                    1. 4

                                                                                                      The shambling corpse of ITA Software perhaps?

                                                                                                      1. 1

                                                                                                        I actually don’t know. I could look it up, but I couldn’t say if I did. ¯\_(ツ)_/¯

                                                                                            1. 5

                                                                                              Please treat yourself to the IANA tz@ mailing list. The research Eggert and team do, both around real-time updates to the list and historical digging, is astonishingly fascinating.

                                                                                              The actual tz database is a wealth of historical data as well – just look at North America and the complexity therein.

                                                                                              1. 1

                                                                                                Agreed, I was on this list back when Bush brought in the DST changes in the US, and it’s a fascinating rabbit-hole to dive down, more complicated than you would ever have thought.

                                                                                              1. 1

                                                                                                By necessity, roles at $BIGCO are logically separated. By learned habit, gatekeeping between worlds is rampant. Ironically, this sort of cross functional work is highly valued, but if you’re drawn to this path, you’ll find yourself doing much more cutting through red tape than real work.

                                                                                                Ugh.

                                                                                                And one of the few acceptable ways many companies allow you to push cross-functional work is by making standards and developing process, so even when you’re doing good things you’re still accreting more layers.

                                                                                                1. 1

                                                                                                  Sometimes it isn’t habit, but regulation. I can not push to production at my job, and the team that can won’t do it without appropriate managerial sign-offs.

                                                                                                1. 5

                                                                                                  Y’all are spending more than $200 on monitors?

                                                                                                  1. 31

                                                                                                    My grandfather, god rest him, was a frugal man. But he often said the two things you should spend extra on are shoes and mattresses, because “when you ain’t in one you’re in the other!” Maybe monitors are shoes for programmers.

                                                                                                    But the ridiculously high end strikes me as maybe a bit much: my quality of life (legit, less squinting and headaches) improved with a 27” 4K monitor, but that was in the $300s.

                                                                                                    1. 14

                                                                                                      I am a cheapskate. I am loath to spend money.

                                                                                                      My office chair costs $750. I sit in it for a minimum of eight hours a day.

                                                                                                      1. 3

                                                                                                        I feel this way about monitors and keyboards. I’ll pay much more for good input/output devices, because that’s how I interact with the computer.

                                                                                                        Personally, I would want a 60hz 5k at 27”, or a 60hz 8k at 32-34”. It annoys me that the screen on my computer (16” MBP) is better than any external monitor I could reasonably hope to use.

                                                                                                        1. 1

                                                                                                          The Dell UP3218K is 31.5” and 8K, but it’s also $3,300, and only works with computers that support DisplayPort Multi Stream Transport over two DisplayPort ports.

                                                                                                          1. 1

                                                                                                            Yeah, it’s going to be a few years.

                                                                                                        2. 1

                                                                                                          Yeah, came here to mention you can get 4K for way less than any of the monitors suggested in the post. I got a matte LG for $250 a while back.

                                                                                                          I have to admit I thought a game running at 30hz felt “smooth” so I’m not sure I could see 60 vs. 120 without a slow-motion camera. YMMV, of course.

                                                                                                          1. 2

                                                                                                            Things with a lot of motion will appear smoother than things sitting completely still. A 30Mhz desktop, with most things not moving (wallpaper, etc) will flicker like crazy since there’s no movement to mask the flicker.

                                                                                                            1. 2

                                                                                                              A 30Hz desktop, that’d be; we’re still a few centuries away from a 30MHz refresh rate.

                                                                                                              1. 2

                                                                                                                pft Your monitor takes longer than Planck time to draw a full frame? n00b.

                                                                                                                1. 3

                                                                                                                  Nah, mine is so fast the photons end up in a traffic jam trying to work their way out of the screen, talk about red shift. Or maybe the CCFT is going bad, who knows…

                                                                                                              2. 1

                                                                                                                Interesting. FWIW, I wasn’t trying to say anyone should go down to 30Hz (or up to 30MHz heh) just that I, personally, probably wouldn’t feel much benefit from 120, given I was able to mix up lower refresh rates.

                                                                                                              3. 2

                                                                                                                You will notice running a desktop at 30Hz. When I got my 4k monitor a few years ago, it turned out my USB-C <-> HDMI adapter could only do 4k@30Hz. It was disturbing ;).

                                                                                                                1. 2

                                                                                                                  Oh, yeah, I wasn’t arguing for actively downgrading to 30hz, just saying I probably wouldn’t feel much benefit from going to 120 given my rough perception of smoothness. I see how it reads differently.

                                                                                                            2. 5

                                                                                                              I spend >$200 on frying pans, for the same reason others mention shoes and beds. It’s something I use every day, and the slight increase in cost per use is well worth it having a tool I enjoy using.

                                                                                                              Edit I’d also like to add that I’m in an economic situation that allows me to consider $200 purchases as “not a huge deal”. I do remember a time of my life when this was emphatically not the case.

                                                                                                              1. 4

                                                                                                                Funny that, I also care about things like that… which is why I got them for free from abandoned houses and even, once, abandoned in a ditch by the roadside. That is where you’ll find old rusting cast-iron skillets in need of just a bit of TLC with a rotary steel brush, a coat of oil and a bake in the oven. The one I found in the ditch was quite fancy albeit rusty, a large Hackman with a stainless steel handle. How it ended up in that ditch in the Swedish countryside I have no idea, I never saw any mentioning of any unsolved murder case for lack of the evidence in the form of the obviously heavy blunt object used to bash in the skull of the unfortunate victim. It was slightly pitted but the steel brush made it almost like new. I use these on a wood-fired stove, just what they’re made for.

                                                                                                                Beds I always made myself (including high wall-mounted rope-suspended sailing-ship inspired ones with retractable ladders which you’d be hard-pressed to find elsewhere) , shoes occasionally (basic car tyre sandals). I find it far more satisfying to spend some time in making something from either raw or basic materials (beds, sandals) or revive from abandonment (cookware, computing equipment, electronics, etc) than to just plunk down more money. Another advantage is that stuff you made yourself usually can be fixed by yourself as well so it lasts a long time.

                                                                                                              2. 3

                                                                                                                I just upgraded my home office monitor for about $30. Suffice to say it’s not 4k, IPS or any of these things considered ‘essential’ for developers. Fourteen years old, it is, however, significantly better and sharper than the monitors most programmers worked on until the 1990s. And they did better work than I ever did.

                                                                                                                If you like spending money on monitors, be my guest, but if you write a blog insisting others should do the same, I think we should call this article out for what it is: promotion of conspicuous consumption.

                                                                                                                1. 2

                                                                                                                  If you write graphical applications or websites it makes sense to have something reasonably good and at least with a high pixel density, because if you work on a website only with a loDPI display and try it on a hiDPI display later you will likely be surprised!

                                                                                                                  It doesn’t have to be top notch, the idea is just to get reasonably close to what Apple calls “Retina”. I can find IPS, 27” 4K displays around €400 on the web.

                                                                                                                  Also, it’s not exactly the same use case but a lot of entry-level phones and tablets have very nice displays nowadays.

                                                                                                                  1. 1

                                                                                                                    if you work on a website only with a loDPI display and try it on a hiDPI display later you will likely be surprised!

                                                                                                                    does this not work both ways?

                                                                                                                    1. 1

                                                                                                                      Not really in my experience. CSS units (px, em) are density-aware and scale nicely, browsers take care to draw lines thinner than one pixel properly, and even downscaling a raster image isn’t always a big deal given how fast modern computers are.

                                                                                                                      1. 3

                                                                                                                        i can only speak for myself, but using a 1024x768 screen for the web has been a pretty poor experience in recent years. a lot of the time fonts are really large and there is a lot of empty space, making for extremely low information density and often requiring scrolling to view any of the main content on a page. sometimes 25-50% of the screen is covered by sticky bars which don’t go away when you scroll. it makes me think web developers aren’t testing their websites on screens like mine.

                                                                                                                        1. 1

                                                                                                                          Some websites really suck, no doubts about it. But web standards are carefully designed to handle different pixel densities and window sizes properly, even if they can’t ensure that websites don’t suck.

                                                                                                                          For example, many bad websites change the default size of the text to something ridiculously big or small. This is a really bad practice. Better websites don’t change the default font size much, or (even better) don’t change it at all, and use em and rem CSS units in order to make everything relative to the font size so the whole website scales seamlessly when zooming in and out.

                                                                                                                          Note that if your browser/operating system is not aware of the pixel density of your display, everything will be too big or too small by default. Basically, zooming in/out is the way to fix it. If you want to test your setup with a reference, well-designed and accessible website you can use some random page on the Mozilla docs.

                                                                                                                          1. 1

                                                                                                                            Some websites really suck, no doubts about it. But web standards are carefully designed to handle different pixel densities and window sizes properly, even if they can’t ensure that websites don’t suck.

                                                                                                                            And you’re saying this means a web designer with a high DPI display can rest assured that his website will look good on a low DPI display, as long as he follows certain practices?

                                                                                                                            Why doesn’t the same apply in the reverse case, where the designer has a low DPI display and wants their website to be usable on a high DPI display?

                                                                                                                            I have to say even the MDN site wastes a lot of space, and the content doesn’t begin until half way down the page. There’s a ton of space wasted around the search bar and the menu items in the top bar, and around the headers and what appear to be <hr>’s.

                                                                                                                            1. 1

                                                                                                                              And you’re saying this means a web designer with a high DPI display can rest assured that his website will look good on a low DPI display, as long as he follows certain practices?

                                                                                                                              Yes I think so. In fact Chrome and Safari have low DPI simulators in their dev tools.

                                                                                                                              Why doesn’t the same apply in the reverse case, where the designer has a low DPI display and wants their website to be usable on a high DPI display?

                                                                                                                              Well it does to some extent, but typically you have to be careful with pictures. Raster images won’t look sharp on high DPI displays unless you’re using things like srcset. Of course it’s absolutely not a deal breaker but it is something to have in mind if you do care about graphics.

                                                                                                                              In anyway, I think the vast majority of web designers are using high DPI displays nowadays.

                                                                                                                              I have to say even the MDN site wastes a lot of space, and the content doesn’t begin until half way down the page. There’s a ton of space wasted around the search bar and the menu items in the top bar, and around the headers and what appear to be <hr>’s.

                                                                                                                              Indeed, and the header also wastes a lot of space (though not the half) on my high DPI 13” display. It’s a bit funny because I didn’t notice it earlier: When I’m looking for something on this website, my eyes just ignore all the large header and I start searching or scrolling immediately.

                                                                                                                              But this “big header” effect is less present on “desktop mode” so you should try to zoom out if the font size isn’t too small for you. I’ve tested it with the device simulator in Safari at about 1220x780 and it does not look that bad to my eyes.

                                                                                                                              1. 1

                                                                                                                                Well it does to some extent, but typically you have to be careful with pictures. Raster images won’t look sharp on high DPI displays unless you’re using things like srcset. Of course it’s absolutely not a deal breaker but it is something to have in mind if you do care about graphics.

                                                                                                                                Yeah I guess this is the one area where low DPI displays could be easier to target without personally testing with one. A large image shrunk will look fine, while a small image enlarged will look like dog shit.

                                                                                                                                The use of high DPI displays by most web designers probably explains why modern sites look so shitty on low DPI displays. But that also means you won’t get fired for making a site that looks shitty on low DPI displays. It also makes sense from a corporate perspective, as high DPI displays are more likely to be used by wealthier people who will be a larger source of revenue, even if low DPI displays are still in widespread use.

                                                                                                                  2. 1

                                                                                                                    A decent monitor lasts a good 3-5 years, possibly longer, but let’s say 3 and be pessimistic. What is a $1,000 monitor worth, as a percentage of your salary over three years? More to the point, what is it as a fraction of the total cost of employing you for three years? According to Glass Door, the average salary for a software developer in the USA is around $75K. Including all overheads, that likely means that it costs around $150K a year in total to employ a developer. Over three years, that’s $450K. Is a $1,000 monitor going to make a developer 0.2% more productive over three years than a $200 monitor? If so, it’s worth buying.
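
                                                                                                                    The same back-of-the-envelope arithmetic, written out (the salary, overhead, and lifespan figures are just the assumptions from this comment):

                                                                                                                    ```python
                                                                                                                    # Break-even calculation: how much extra productivity must a
                                                                                                                    # pricier monitor buy to pay for itself?
                                                                                                                    # Assumptions from above: ~$150K/year fully-loaded cost per
                                                                                                                    # developer, a 3-year monitor lifespan.
                                                                                                                    yearly_cost = 150_000
                                                                                                                    years = 3
                                                                                                                    total_cost = yearly_cost * years        # $450,000 over three years

                                                                                                                    monitor_delta = 1_000 - 200             # extra spend vs. a $200 monitor

                                                                                                                    break_even = monitor_delta / total_cost
                                                                                                                    print(f"Break-even productivity gain: {break_even:.2%}")
                                                                                                                    ```

                                                                                                                    That prints roughly 0.18%, which rounds to the 0.2% figure above.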

                                                                                                                  1. 4

                                                                                                                    Not for me. My eyes hurt badly when they try to read light text on a dark background, no matter what my environment is. It’s bad enough where I have to enable reader mode, or if that doesn’t work, either not read it at all or copy the text somewhere else.

                                                                                                                    1. 3

                                                                                                                      Amusingly enough I had my system configured for dark mode when I read the site and thought I rather liked it. Then I came back here, read your comment, and realized I much prefer black on white text because Lobsters doesn’t have a dark mode.

                                                                                                                    1. 4

                                                                                                                      I’ve been a happy user for ~5 years. In the last 20 years of heavily using email, it’s been overall the best mail user agent I’ve seen. Strongly recommended.

                                                                                                                      Some things that set mu4e apart for me: super fast search and UX, the ability to handle multiple email addresses transparently, the ability to use Org mode capture templates, and good support for viewing HTML emails. There are many more good reasons if we want to get geekier, for example the ability to use procmail (spam, auto sorting, etc), the ability to work with mail offline, very good GPG support, and much more^^

                                                                                                                      The initial configuration is a little bit of work, but it’s paying off for years to come. If you’re looking for a setup apart from the docs, here’s my config: https://github.com/munen/emacs.d/blob/master/configuration.org#mu4e

                                                                                                                      1. 3

                                                                                                                        I’d second this.

                                                                                                                        Super fast search

                                                                                                                        This is one of the major wins. Mu’s Xapian backend chews through heaps of email and gives incredible search results. I’d completely skipped any type of filing to folders; it just works.

                                                                                                                        The only reason I’m not using it right now is that we lost the war on HTML email and bottom posting, so I’ve surrendered and gone to Outlook so my coworkers stop asking me why my email comes out funny on their phones.

                                                                                                                        If anyone knows a good set of hooks to make mail-mode transform to “quasi-nice HTML” – turn > into a blockquote, maybe * and _ to <b> and <u>, etc., I’d love to see it.
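
                                                                                                                        The transform itself seems simple enough to sketch. Here’s the idea in Python rather than elisp (I don’t know of an existing mail-mode hook for this), using only the rules mentioned above; the function name and markup rules are made up for illustration:

                                                                                                                        ```python
                                                                                                                        import html
                                                                                                                        import re


                                                                                                                        def emphasize(escaped: str) -> str:
                                                                                                                            # *bold* and _underline_, applied after HTML-escaping
                                                                                                                            escaped = re.sub(r"\*([^*]+)\*", r"<b>\1</b>", escaped)
                                                                                                                            return re.sub(r"_([^_]+)_", r"<u>\1</u>", escaped)


                                                                                                                        def text_to_quasi_html(body: str) -> str:
                                                                                                                            """Plain text -> 'quasi-nice HTML': '>'-quoted lines become
                                                                                                                            <blockquote>, *x* -> <b>x</b>, _x_ -> <u>x</u>.
                                                                                                                            A sketch of the idea, not a real mail-mode hook."""
                                                                                                                            out = []
                                                                                                                            for line in body.splitlines():
                                                                                                                                if line.startswith(">"):
                                                                                                                                    inner = emphasize(html.escape(line.lstrip("> ")))
                                                                                                                                    out.append("<blockquote>" + inner + "</blockquote>")
                                                                                                                                else:
                                                                                                                                    out.append(emphasize(html.escape(line)))
                                                                                                                            return "<br>\n".join(out)
                                                                                                                        ```

                                                                                                                        Wiring something like this into a compose hook (and making the output look Outlook-ish) is the part I’d still want a proper package for.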

                                                                                                                        1. 2

                                                                                                                          Hi owen

                                                                                                                          Thanks for bringing up this valid concern. I’ve heard it often before, but personally have not encountered an issue. To share my experience and setup, I quickly created a screencast and a blog post that shows how I work with HTML emails: https://200ok.ch/posts/2020-05-27_using_emacs_and_mu4e_for_emails_even_with_html.html

                                                                                                                          All the best and good email consumption/writing(;

                                                                                                                          Update: And now I’ve just read that you’re talking about the opposite way lol

                                                                                                                          Well, there’s a built-in way to transform Org to HTML, but I haven’t used it: https://github.com/djcb/mu/blob/master/mu4e/org-mu4e.el

                                                                                                                          Update 2: If I understand you correctly, your issue was that people read mails on their phone and that you used fixed-width mails (maybe 74 chars). That’s easily fixed by using format flowed. The config for that is a one liner in mu4e.
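
                                                                                                                          For anyone unfamiliar with format=flowed (RFC 3676): a trailing space marks a line as soft-wrapped, so the receiving client can rejoin lines and reflow the paragraph to its own width. A rough sketch of what the receiver does (Python just to show the mechanics; it ignores quoting, signature lines, and space-stuffing):

                                                                                                                          ```python
                                                                                                                          def unflow(text: str) -> str:
                                                                                                                              """Rejoin soft-wrapped lines per the core format=flowed
                                                                                                                              rule: a line ending in a space flows into the next line;
                                                                                                                              any other line is a hard break."""
                                                                                                                              paragraphs, current = [], ""
                                                                                                                              for line in text.split("\n"):
                                                                                                                                  if line.endswith(" "):   # soft break: keep accumulating
                                                                                                                                      current += line
                                                                                                                                  else:                    # hard break: paragraph ends here
                                                                                                                                      paragraphs.append(current + line)
                                                                                                                                      current = ""
                                                                                                                              if current:
                                                                                                                                  paragraphs.append(current)
                                                                                                                              return "\n".join(paragraphs)
                                                                                                                          ```

                                                                                                                          The mu4e one-liner is, if I remember right, `(setq mu4e-compose-format-flowed t)`.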

                                                                                                                          1. 1

                                                                                                                            Thanks for the screencast and blog post: they look like they will still be worth a peek to try to improve my environment! I’ll also have to take a look at org-mu4e as well – funny enough, I used mu4e-org a ton to get emails in my agenda… wish I’d have thought about going the other direction ;-)

                                                                                                                            That’s easily fixed by using format flowed. The config for that is a one liner in mu4e.

                                                                                                                            Two problems:

                                                                                                                            1. f=f just doesn’t work with the MUAs I’m emailing to (Outlook, some Android mail users). 998-wide lines, as in the proposal there, were OK, but:
                                                                                                                            2. Beyond the “funny phone” problem the real problem is that the users I communicate with almost always expect a very specific thing: HTML emails generated by Outlook, period. Plain text emails were never workable because the use of highlights/tables/etc. was essentially stripped out.

                                                                                                                            But I’m definitely going to check out org-mu4e though to see if it fits. It probably won’t take a ton to line it up to look like Outlook…and if I can get close enough it might be worth it.

                                                                                                                            1. 2

                                                                                                                              Thank you for the information that format=flowed by default doesn’t work well with some clients like Outlook. The additional configuration of setting the maximum allowed width seems like a good workaround. I just confirmed that it works with Outlook 365 (in my tests).

                                                                                                                              FWIW, based on my experience, f=f works well at least with Apple Mail and iOS.

                                                                                                                              As for mail threads that let HTML prevail on responding whilst sending plain text, that’s probably going to be a hard problem, indeed. I might be lucky, because in my experience people with HTML mailers don’t start threads. Especially not with Outlook (365) which doesn’t seem to have the capability to quote from the previous mail. And without quoting, longer mail threads become completely unintelligible quite fast. Having said that, I do understand that some people prefer to communicate in this manner all the time anyway^^

                                                                                                                              In any case, all the best, and thank you, again for the additional information on f=f!🙏

                                                                                                                        2. 3

                                                                                                                          Super fast search

                                                                                                                          That is not my experience. Even things like bu (Unread messages) take ~5 seconds. notmuch on the other hand has always been super fast. Maybe I’m doing something wrong?

                                                                                                                          I like the UI of mu4e more fwiw.

                                                                                                                          1. 2

                                                                                                                            bu is near instant for me, as are other queries. I’ve got a mail archive of just short of 40k messages. Maybe you’ve got significantly more?

                                                                                                                            munen@lambda:~% time mu find flag:unread >& /dev/null
                                                                                                                            mu find flag:unread >&/dev/null  0.00s user 0.00s system 94% cpu 0.009 total
                                                                                                                            
                                                                                                                            munen@lambda:~% find Maildir -type f | wc -l
                                                                                                                            39240
                                                                                                                            
                                                                                                                            1. 3

                                                                                                                              I’ve got a mail archive of just short of 40k messages

                                                                                                                              I’ve got around 165K, not significantly more. Calling mu from the CLI is instantaneous. It is when I call it from Emacs that it takes ~5 seconds. The issue is likely on the Emacs side. I have a similar experience on the three machines I’ve set up mu4e on.

                                                                                                                              $ time mu find flag:unread >& /dev/null
                                                                                                                              
                                                                                                                              real    0m0.016s
                                                                                                                              user    0m0.006s
                                                                                                                              sys     0m0.010s
                                                                                                                              puercopop@PuercoDesktop:~
                                                                                                                              $ find Maildir -type f | wc -l
                                                                                                                              165508
                                                                                                                              
                                                                                                                              1. 1

                                                                                                                                I don’t have nearly as many mails as you so I’m not certain, but you can try bumping up gc-cons-threshold and read-process-output-max:

                                                                                                                                (setq gc-cons-threshold 100000000
                                                                                                                                      read-process-output-max (* 1024 1024))
                                                                                                                                
                                                                                                                                1. 2

                                                                                                                                  Thanks, that did it. The gc-cons-threshold took it down to 2 seconds, and the read-process-output-max made it instant. Looks like I only need to update read-process-output-max.

                                                                                                                          2. 1

                                                                                                                            I managed to get this up just yesterday for my work Gmail account. I haven’t yet figured out how to set up multiple Gmail accounts using the recommended contexts. The contexts example is a bit naive and doesn’t cover separating Maildirs per account, refresh rates, etc.

                                                                                                                            So far the look and feel of mu4e is very good I must say.

                                                                                                                          1. 1

                                                                                                                            Well, the website has more than one header I don’t expect. I’m not familiar with the Amazon cloud. Is it normal that it adds all this extra stuff?

                                                                                                                            x-amz-server-side-encryption: AES256
                                                                                                                            x-amz-cf-pop: FRA53-C1
                                                                                                                            x-cache: RefreshHit from cloudfront
                                                                                                                            x-amz-cf-pop: FRA53-C1
                                                                                                                            x-amz-cf-id: xmjg0VUglNLu4eimYuTNRCPrnuHnrIsHL1wvmiDBF-MXaWFq1iLKHw=
                                                                                                                            
                                                                                                                            1. 3

                                                                                                                              Not all AWS services add these; in this case, the site was served over the CloudFront CDN. Other services are less noisy.

                                                                                                                              In the spirit of sharing fun header tidbits: the x-amz-cf-pop header shows the point of presence that served this request. AWS uses IATA airport codes, so this came from Frankfurt, Germany. I got an ORD value there, so my data came from Chicago.
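                                                                                                                              As a quick sketch: the airport code is just the first three letters of the POP identifier. The value below is the FRA53-C1 from this thread; that the code is always the leading three letters is an assumption based on observed POP names, not documented behaviour.

```python
# Sketch: pull the three-letter IATA airport code out of an
# x-amz-cf-pop header value. Assumes POP names always start with
# the airport code, as the examples in this thread (FRA53-C1, ORD) do.
pop = "FRA53-C1"
airport = pop[:3]
print(airport)  # FRA
```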

                                                                                                                            1. 13

                                                                                                                              The RFC explicitly forbids this kind of use, only allowing the lowest identifier to be a wildcard, and only if it is not a public suffix itself.

                                                                                                                              It is very surprising that browsers don’t match on this properly.
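                                                                                                                              For illustration only, here is a minimal sketch (in Python, not any browser’s actual code) of the left-most-label-only wildcard rule; the public-suffix check also called for is deliberately left out:

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Simplified sketch of RFC 6125-style matching: a wildcard is
    only honoured as the entire left-most label, and there must be
    at least two literal labels after it."""
    p = pattern.lower().split(".")
    h = hostname.lower().split(".")
    if len(p) != len(h) or len(p) < 3:
        # A wildcard never spans multiple labels, and bare patterns
        # like "*.com" are rejected outright.
        return False
    head, rest = p[0], p[1:]
    if head != "*" and head != h[0]:
        return False
    # All remaining labels must match exactly; no wildcards there.
    return rest == h[1:]

print(wildcard_matches("*.example.com", "foo.example.com"))   # True
print(wildcard_matches("*.example.com", "a.b.example.com"))   # False
print(wildcard_matches("foo.*.com", "foo.example.com"))       # False
```

Note that `wildcard_matches("*.co.uk", "victim.co.uk")` still returns True here; rejecting wildcards that cover a public suffix would need the extra PSL check this sketch omits.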

                                                                                                                              1. 16

                                                                                                                                While it’s a little easier for you to write “the RFC”, it would be helpful for you to mention which RFC for those of us reading.

                                                                                                                                1. 3

                                                                                                                                  https://tools.ietf.org/html/rfc6125#section-6.4.3 says SHOULD.

                                                                                                                                  What are you talking about?

                                                                                                                                  1. 1

                                                                                                                                    The Certification Authority (CA)/Browser Forum baseline requirements (11.1.3) require that before issuing a wildcard certificate, Certificate Authorities ensure that such a certificate is not issued for entries in the Mozilla PSL, e.g. *.co.uk, or that the entity actually owns the entirety of the public suffix

                                                                                                                                    Please read all sub-threads before posting a reply :)

                                                                                                                                    1. 3

                                                                                                                                      This is a requirement for CAs, not user agents. Such a certificate would not be issued by a (public) CA, but it is not invalid for browsers. It is perfectly valid for private CAs to do this, e.g. so you could MITM all of your workers’ traffic.

                                                                                                                                  2. 2

                                                                                                                                    Which RFC? How is “public suffix” defined? Does it simply defer to the Public Suffix List?

                                                                                                                                    1. 2

                                                                                                                                      There are two kinds of public suffixes: those defined by ICANN, which are also included in the public suffix list, and the not-really-official private definitions in the public suffix list.

                                                                                                                                      And quoting the ICANN advisory on this:

                                                                                                                                      The Certification Authority (CA)/Browser Forum baseline requirements (11.1.3) require that before issuing a wildcard certificate, Certificate Authorities ensure that such a certificate is not issued for entries in the Mozilla PSL, e.g. *.co.uk, or that the entity actually owns the entirety of the public suffix

                                                                                                                                      So while it’s not an RFC, it’s still a standard – and an even stronger one at that.

                                                                                                                                      1. 3

                                                                                                                                        it’s still a standard – and an even stronger one at that

                                                                                                                                        You are confused. That is not a quote from a standard for web browsers or TLS implementations, but for people who want to make a certificate signing authority that CA/B members (like Mozilla, Google, Microsoft, and so on) would include in their web browsers.

                                                                                                                                        There are lots of reasons to make certificates that Mozilla (for example) would not include in the Firefox web browser, and valid TLS implementations are required to interpret them according to the actual standard, which is broader than what you’re reading here.

                                                                                                                                        1. 3

                                                                                                                                          Sounds like a political limitation, not a technical limitation. Unless SSL consumers start to enforce this on their end, it wouldn’t prevent a malicious CA from issuing a cert like this that could be used to MITM all traffic.

                                                                                                                                          1. 6

                                                                                                                                            Sounds like a political limitation, not a technical limitation.

                                                                                                                                            That’s the state of web PKI in a single sentence.

                                                                                                                                            1. 4

                                                                                                                                              That’s exactly the point – I was expecting browsers to actually implement this spec and verify it for certificates (as I already do, in a limited way, in Quasseldroid).