1. 12

    Recognizing that the how and the why change at different velocities is useful, but their solution to managing the difference is the worst I can imagine. Rather than providing documentation in the system and context where the user is working, this approach requires users to leave that context (e.g., the command line) to query another system, in this case Slack.

    For detailed use cases, doc doesn’t need to be fully contextual. On Unix-like systems, that’s the point of the additional doc in /usr/local/share/doc. But that’s not what we’re talking about here. The examples in the article are for basic usage:

    Question: “How does one use dbconnect to dev vs prod?”

    Answer: Query Slack.

    What’s the user supposed to do when (not if) Slack has an outage? Wait until it’s up again? That’s not a production-friendly answer.

    Again, recognizing the difference between the why and how of documentation is useful. This solution is not.

    1. 3

      Books and legal documents had timestamps and versioning a century before software existed.

      It’s easy to update and commit software documentation and specifications together with the code. (Tools like Bugs Everywhere tried to do the same for bugs, but unfortunately the popularity of GitHub killed them.)

      Documentation does not go stale magically: it goes stale when it is treated as a second-class citizen.

    1. 11

      Making --dry-run the default is very counterintuitive for those already used to how most, if not all, command-line utilities currently work. In Unix and Unix-like operating systems the assumption has, AFAICT, always been to execute the primary function of the utility, assuming the user knows what they want to do. Any deviation from the default is controlled by flags. Many utilities will check an rc file to override the defaults. For those that don’t, there are aliases and shell functions.

      I’m hard pressed to think of many cli utilities that display what they do first. The exceptions are those with an interactive mode, usually invoked with -i. The only utility I can think of that backs off before making a catastrophic change is git push, but that’s because the user’s repository is out of sync with the target repo. The default is always to push.

      That’s not to say that the user community couldn’t swap the default to safety-first for functions and utilities that make changes to the system, but it’s a massive shift in terms of training, expectations, and productivity for the community at large. If I had to type mv --no-dry-run every time I wanted to rename a file, I’d pretty quickly write a bunch of aliases to invoke them the traditional, and correct (IMHO), way. :D

      That said, I’ve always appreciated rsync’s --dry-run flag. I use it often, and many of my utilities offer the same feature. My counter-suggestion to the author (@moshez) would be to write utilities the “traditional way” so the default case is to do whatever the utility was designed to do, add a -n or --dry-run flag for pre-flight verification, and consider balking only in cases of real conflict, a la git.
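      As a sketch of that counter-suggestion, here is a minimal hypothetical utility in Python (the tool name and its arguments are invented for illustration): the default action actually runs, and -n/--dry-run opts into a preview.

```python
import argparse
import shutil

def run(argv=None):
    # Traditional default: perform the copy; -n/--dry-run opts into a preview.
    parser = argparse.ArgumentParser(prog="cpfile", description="copy SRC to DST")
    parser.add_argument("src")
    parser.add_argument("dst")
    parser.add_argument("-n", "--dry-run", action="store_true",
                        help="print what would be done, then exit")
    args = parser.parse_args(argv)

    if args.dry_run:
        return f"would copy {args.src} -> {args.dst}"
    shutil.copy(args.src, args.dst)
    return f"copied {args.src} -> {args.dst}"

if __name__ == "__main__":
    print(run())
```

      An alias like `alias cpfile='cpfile -n'` could then flip the default for users who prefer safety-first, without changing the tool itself.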

      1. 3

        I think the trade-off here is about the frequency of tool use, and the scope of change a tool can make, or how many people it can affect. With core utils like cp or common tools like tar, which you may execute hundreds of times a day, a dry-run default (requiring --no-dry-run for every real run) would be expensive and annoying for no upside, because the annoyance cost would be extremely high and the potential damage is isolated to a single system. But imagine a tool that someone on your team may run weekly or monthly, and that could destroy or re-provision hundreds of machines running mission-critical systems: with that sort of tool, it’s well worth it to be extremely verbose, print debugging info by default, and require special action to enact changes.

        In that case the downside to writing tools “the traditional way” is the potential for huge injury. You can argue that one shouldn’t give the untrained such permission or responsibility to use a tool like that, but I’d rather have systemic oops protection if I can get it.

        An example of a tool at my job that requires a --no-dry-run flag is a mass refactoring tool that runs a user-defined job and could open hundreds of PRs on your behalf. Much better to have dry-run as the default than to have a new user of the tool spam every project with a nonsense PR. We have other tools that require the env var I_AM_AN_EXPERT_AND_ALSO_RECKLESS to be set before performing actions against production without confirmation.
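        A sketch of that env-var guard in Python (the function and host names are hypothetical; only the variable name comes from the comment above):

```python
import os

class RefusedError(RuntimeError):
    """Raised when a destructive action lacks the explicit opt-in."""

def require_recklessness(env=None):
    # Destructive actions against production require an explicit opt-in.
    env = os.environ if env is None else env
    if env.get("I_AM_AN_EXPERT_AND_ALSO_RECKLESS") != "1":
        raise RefusedError(
            "refusing: set I_AM_AN_EXPERT_AND_ALSO_RECKLESS=1 to proceed")

def deprovision_hosts(hosts, env=None):
    require_recklessness(env)
    # ... the actual destructive work would happen here ...
    return [f"deprovisioned {h}" for h in hosts]
```

        The env var is deliberately loud and embarrassing to type, which is part of the protection.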

        1. 8

          I’d prefer all tools be dangerous or safe, but not a mixture. It’s inconsistency that causes confusion:

          “Hmm…does rsync default to dry-run or no-dry-run?” man rsync

          “Ansible, that’s --no-dry-run, right?” man ansible

          Consistency leads to confidence. Are all command-line utilities consistent? No. But that typically comes from folks deciding their approach is more important than the user interface.

          One of the great geniuses of Apple Human Interface Guidelines in the 80s was the emphasis on consistency and predictability (not that Apple cares about that as much anymore). Users knew where to look and what to expect. The same should be true with cli tools.

          I worked at a shop where one of the sysadmins set “sane defaults” via aliases for all users on the systems he managed. Unfortunately, they were often diametrically opposed to the real defaults. Users would log in to systems not under his control and find out the hard way that mv -i wasn’t the default on those systems. It led to false confidence because they thought they knew how the utilities worked.

          You can argue that utilities like rm are safer on a single machine than a utility that touches hundreds of machines, but rm -rf on a control server might be worse. I want my users to be as conscientious on a single box as they are on a thousand. The fact of the matter is that, broadly speaking, cli utilities do what you ask of them without question or hesitation. Users should be taught to expect that and execute a dry run first if that’s what’s needed.

          I’ll also echo @KevinMGranger’s comment elsewhere in this thread about warning fatigue. Great point!

          1. 1

            I agree with you that changing the defaults for existing tools like mv sounds like a nightmare. Even if you think mv should act differently than it does, silently changing behavior on some machines is begging for accidents on the others…

            But while it may be common for many Unix utilities to offer no warning messages, I agree with @jitl’s point about frequency and scope — tools that you use frequently with limited scope (such as moving files on a single machine) shouldn’t have warnings, because that leads to warning fatigue. But, tools that are both potentially very dangerous and used infrequently probably should require warnings, because an error is painful to recover from and it’s unlikely that you’ve built safe habits around the tool since it’s rarely used.

            Off the top of my head, here are some real-world examples of tools / CLI-based workflows that distinguish between dangerous and regular operations, or prompt you for potentially dangerous ones:

            • Protected branches with git, either by disabling force-pushes to master, or potentially disabling pushes to master entirely. This is common enough that Github built a UI for branch protection; previously you’d need access to the git server itself to install pre-receive hooks. Pushing or force pushing to non-master branches is fine; doing it to master isn’t, since it’s rarely needed or desired (usually you’ll only do that if… someone else accidentally force-pushed to master and you’re trying to clean up).
            • Running git clean requires a -f to actually delete any files (or special configuration in your git config).
            • Even regular old rm has built-in controls. For example, deleting a single file is fine, but to delete an entire directory you must pass -r. And on current versions of GNU rm, it will refuse to delete the root directory unless you pass the special flag --no-preserve-root.
            • Some commands check for UID 0 and fail with an error message if you’re running them as root, and require special configuration or flags to run as root. For example, brew does this, and IMO this is really good practice. Running brew as root will likely mess up whatever you’re trying to install by making the files owned by root (and thus unmanageable by anyone other than root, e.g. brew itself unless invoked with sudo again), it’s an easy mistake for novices to make, and intentionally running brew as root should almost never happen.
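            The UID-0 check in that last bullet can be sketched in a few lines of Python (the function name and message are invented; brew’s actual check differs):

```python
import os

def refuse_root(euid=None):
    # Fail early when run as root: files created now would be owned by
    # root and unmanageable by the regular user later.
    euid = os.geteuid() if euid is None else euid
    if euid == 0:
        raise SystemExit("error: refusing to run as root")

refuse_root(euid=501)  # a regular user id passes silently
```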

            That being said it’s probably no surprise I agree with @jitl, since we sit next to each other at our office and work on many of the same internal tools :)

      1. 12

        In a nutshell, MTA Strict Transport Security (mta-sts) is a mechanism to publish SMTP security policies via DNS and HTTPS. DNS is used to publish an id that is changed whenever the policy changes, and HTTPS to publish the policy itself. The policy covers:

        • whether MTAs sending mail to this domain can expect PKIX-authenticated TLS support

        • what a conforming client should do with messages when TLS cannot be successfully negotiated
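        Concretely, the two moving parts look roughly like this (example.com, the id, and the MX host are illustrative; the field names come from the MTA-STS spec, RFC 8461):

```
; DNS: a TXT record whose id changes whenever the policy changes
_mta-sts.example.com.  IN  TXT  "v=STSv1; id=20180614T000000Z"

; HTTPS: the policy itself, fetched from
; https://mta-sts.example.com/.well-known/mta-sts.txt
version: STSv1
mode: enforce
mx: mail.example.com
max_age: 604800
```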

        An SMTP security policy is itself a good idea. It solves the problem of a MITM stripping the STARTTLS option during initial negotiation, or at least warns the sending MTA that something is wrong and tells it what to do if a TLS session can’t be created.


        It’s the requirement to publish the policy via https that I find at best odd. I don’t see a good justification in the RFC for this “mechanism” to require a web server in addition to DNS. Yes, I get it: DNS records can only be so long. Yet we have protocols like SPF and DKIM that function just fine without a web server.

        In the RFC, MTA-STS is described as an alternative to DNSSEC, but it doesn’t solve the problem of unsigned DNS records. While the policy itself is served via https from a web server that must present a valid certificate, the mechanism still depends on DNS. The policy is published at a well-known location, at a well-known Policy-Host name resolved by DNS. So two required components of the mechanism depend on DNS.

        There’s no way to implement this mechanism without DNS look-ups, so it’s not a secure alternative to DNSSEC. It’s just an alternative.

        One might think speed and efficiency are the reason. Perhaps https (or more likely http/2) is faster than DNS, but, like HSTS, the policy is meant to be cached. Policy invalidation is handled by DNS look-ups that return the policy id, which signals when the cached policy is stale. Trading speed (assuming http/2 is faster than DNS) for a one-time look-up and then going back to DNS for invalidation look-ups makes little sense.
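        The invalidation flow described above amounts to something like this sketch in Python (the cache shape is invented for illustration):

```python
def policy_is_current(cached_policy, dns_id):
    # A sender caches the fetched policy together with the id it saw in
    # the _mta-sts TXT record. A later DNS lookup that returns a
    # different id means the cached policy is stale and the HTTPS
    # policy document must be re-fetched.
    return cached_policy is not None and cached_policy["id"] == dns_id

cached = {"id": "20180614T000000Z", "mode": "enforce", "max_age": 604800}
assert policy_is_current(cached, "20180614T000000Z")
assert not policy_is_current(cached, "20180701T000000Z")  # id changed: re-fetch
```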

        Lastly, it deviates from other SMTP policy mechanisms like SPF, DKIM, and DMARC which all rely on DNS. That alone is a strong argument for not adding a web server to host the policy.


        I’m left scratching my head. Requiring a web server to host the MTA-STS policy seems like a gratuitous addition, using two protocols to serve the purpose of one. Worse, it complicates SMTP daemons everywhere: to implement this mechanism, they can no longer rely on the OS resolver alone; they need a minimal https client. More code paths. More opportunities for failure.

        Further Reading: Introducing MTA Strict Transport Security (MTA-STS)

        1. 2

          Oh, yeesh. I’ve only followed development of STS to the extent I knew it was a thing. No idea about the web server aspect. That’s weird. Good commentary.

          1. 1

            Looks like what they’re going for is being secure against spoofing the policy, even if it’s not possible to protect against completely stripping it (when not cached). Yeah, a bit weird – if someone MITMs DNS, removing the record would be enough.

            1. 1

              The reason HTTPS is used is that the policy is validated with a TLS certificate. The whole thing was introduced to have an alternative to DNSSEC-based mechanisms, because DNSSEC has existed for more than a decade and hasn’t been adopted.

              Many people who are used to how things are done in E-Mail and DNS scratch their heads and don’t understand MTA-STS, because it’s doing things differently. They use HTTPS because HTTPS works and is successful. It is a good idea to use a security mechanism that works and build something on top of it.

              1. 3

                I think the question is more about the http part of https and not the s part.

            1. 2

              I just switched to OpenBSD for e-mail using the following stack:

              Inbound: opensmtpd -> spampd(tag) -> opensmtpd -> clamsmtpd(tag) -> opensmtpd -> procmail -> dovecot(Maildir)
              Outbound: opensmtpd -> dkim_proxy -> opensmtpd(relay)

              I don’t use the spamd/greylisting up front like a lot of tutorials suggest, but spampd (SpamAssassin) seems to catch the majority of it.

              My old stack was similar, but used postfix on openSUSE. I really like the opensmtpd configuration; loads simpler than postfix. However, I wish it supported filters like the other MTAs do. It had filter support for a bit, but it was clunky and subsequently removed. That makes it difficult (impossible?) to run things like rspamd.

              1. 5

                rspamd has an MDA mode, so you can do like

                accept from any for local virtual { "@" => mike } deliver to mda "rspamc --mime --ucl --exec /usr/local/bin/dovecot-lda-mike" as mike

                and dovecot-lda-mike is

                #! /bin/sh
                exec /usr/local/libexec/dovecot/dovecot-lda -d mike

                smtpd is really really really good. For some reason the email software ecosystem is a mess of insane configs and horrible scripts, but my smtpd.conf is 12 lines and the only script I use (that rspamd one) is going to go away when filters come back. smtpd is so good I went with an MDA instead of a web app to handle photo uploads to my VPS. It’s one line in smtpd.conf and ~70 lines of python, and I don’t have to deal with fcgi or anything like that.

                1. 1

                  smtpd is so good I went with an MDA instead of a web app to handle photo uploads to my VPS

                  Oh that’s a clever idea. I’ve been using ssh (via termux) on my phone but that is so clumsy.

                2. 5

                  I do greylisting on my email server [1] and I’ve found that it reduces the incoming email by 50% up front—there are a lot of poorly written spam bots out there. Greylisting up front will reduce the load that your spam system will have to slog through, for very little cost.

                  [1] Yes, I run my own. Been doing it nearly 20 years now (well over 10 at its current location) so I have it easier than someone starting out now. Clean IP, full control over DNS (I run my own DNS server; I also have access to modify the PTR record if I need to) and it’s just me—no one else receives email from my server.

                  1. 2

                    I’m the author/presenter of the tutorial. If I may, I suggest looking at my talk this year at BSDCan: Fighting Spam at the Frontline: Using DNS, Log Files and Other Tools in the Fight Against Spam. In those slides I talk about using spf records (spf_fetch, smtpctl spfwalk, spfwalk standalone) to whitelist IPs and mining httpd and sshd logs for bad actors and actively blacklisting them.

                    For those who find blacklisting a terrifying idea, in the presentation I suggest configuring your firewall rules so that your whitelists always win. That way, if Google somehow gets added to your blacklists, the whitelist rule will ensure Gmail can still connect.

                    I also discuss ways to capture send-to domains and add them to your whitelists so you don’t have to wait hours for them to escape the greylists.

                    1. 1

                      I didn’t find SPF to be all that great, and it was nearly the same three years earlier. Even the RBLs were problematic, but that was three years ago.

                      As for greylisting, I currently hold them for 25 minutes, and that might be 20 minutes longer than absolutely required.

                    2. 1

                      Greylisting is the best. Back when my mailserver was just on a VPS it was the difference between spamd eating 100% CPU and a usable system.

                  1. 10

                    If you want proper as in “canonical”, John Gruber’s original is your best choice. It is effectively the definition of markdown. But markdown has moved on beyond Gruber’s original vision, mostly without him. Still, his original vision, developed in collaboration with Aaron Swartz, is very usable, especially for blogging.

                    Once you move beyond the original Perl code, you have numerous choices. Here are a few:

                    And there’s no shortage of online converters and other versions.

                    I’ve used pandoc for years. It doesn’t just convert markdown to html. It converts from multiple formats to multiple formats, including various slide formats as well as PDF. It supports Python and Haskell scripts, as well as Lua scripting. The Lua scripting is very interesting because it manipulates the AST in memory, as opposed to Python and Haskell where pandoc marshals the AST to a JSON representation, pipes it to the script, and then receives the output.

                    Lowdown is interesting because it’s a fork of hoedown. @kristaps went through it and added a proper internal AST representation, as well as pledging and privilege-separating it. He doesn’t expose the AST (yet?), but it’s fast and efficient.

                    Multimarkdown is a good choice if you want to stay in Markdown but target more than one output format. Like pandoc, mmd can export to several slide formats and pdf by way of LaTeX.

                    Lastly, the lua and awk versions are noted above because I like little languages that do so much. ;-)

                    For most of what markdown was originally designed for, text files converted to HTML, you almost can’t go wrong. Where you’re going to find your choices both expanded and somewhat complicated is when you move beyond basic markdown and start trying to write slides, documentation, books, or use add-on features like tables.

                    I’d suggest starting with basic markdown as described on Gruber’s Markdown page, picking one of the simpler converters like his Perl script, and seeing whether you need anything beyond that. Then look at Pandoc and Multimarkdown. If you don’t need more features but want more speed, then look at Lowdown.

                    1. 2

                      lowdown does have its library exposed! pkg_add lowdown, man 3 lowdown.

                      1. 1

                        Thanks for the detailed answer!

                      1. 3

                        I like it. It makes reading comments super fast. One minor issue, though.

                        I’m using the gopher+ distribution (gopher-3.0.11p2). When I open the link I see “Page 1/4”. As I page down (or jump to the next page using “>”) I get to 3/4 and then it wraps back around to 1/4. I never see “Page 4/4”. Not a major problem, but paging works on other gopher sites.

                        Thanks for making it!

                        BTW, for anyone looking for a client, I find plain gopher has saner navigation than lynx. And less color! \o/

                        1. 21

                          Another remote access app may be welcome, but the claims are exuberant, especially when it compares itself to a codebase and specification that have been around a long time and proved their worth and mettle.

                          What Is It?

                          It’s like SSH, but more secure, and with cool modern features. It is not an implementation of SSH, it is a new, modern protocol.

                          Does Oxy have…

                          + Years of testing and battle hardening? No, it's super green. But hey, if you try it you'll help make it less green!

                          Questions I’d like to see answered:

                          • Who designed it?
                          • Who implemented it?
                          • Who reviewed the design and implementation?
                          • Who evaluated the use and implementation of the crypto?
                          • How was it tested to be shown “more secure”?

                          The site links a protocol specification. Excellent. But that’s just the beginning.

                          Curious to know the same about OpenSSH? See the specifications page. Want to know more about the project and where it came from? See the home and history pages.

                          Oxy is a new project and it takes time to build out a project site. Keep working on both the app and the site. But seriously, don’t expect security-minded folks to believe it’s “like SSH, but more secure” based on assertion. Deliver some serious smack-down proof. And then do it long enough to prove the project has the chops and the commitment to do it day in and day out for decades.

                          Security is about more than claims. Until we see more evidence, I’ll watch Oxy and test it, but I’m not dropping ssh and nearly 20 years of demonstrated commitment to the principles and procedures for delivering secure software.

                          1. 9

                            This is a really timely post. Yesterday after reading @garybernhardt’s revision of Linus’ email, I went down the rabbit hole of type punning, strict aliasing, and undefined behavior. Here are some more helpful articles:

                            John Regehr’s articles are always informative, as are Lattner’s. Cardelli’s is really helpful as far as understanding the value of type systems, how to think about them, and what they can and cannot do for you.

                            The article @GeoffWozniak posted here has helpful links in the footnotes as well.

                            1. 2

                              Cardelli also worked on the Modula-3 systems language. It has one of the best balances I’ve seen of features vs simplicity, along with fast compiles, safety by default, and concurrency support. It also had the first standard library with partial formal verification, using the precursor to Why3.

                              All it needed was macros to be a nearly perfect alternative to C++. Well, maybe C syntax and some compatibility too, given how important that turned out to be. I consider that mandatory these days if aiming for the C or C++ crowds.

                            1. 9

                              What Article 13 seemingly overlooks is the place of fair use/fair dealing. How does one critique a work when one can’t display it without a license? Yes, there’s an appeal process … but who’s the arbiter? How long does it take to get approval?

                              What about transformative fair use? Is there a place in Article 13 to build upon the work of others?

                              Yes, I understand we Americans have an expansive view of fair use, but Article 13 is too restrictive. It’s especially so when you consider how long and broad copyrights are. There is a very real need for balance between the rights of authors and those of the public on whose behalf governments extend copyright.

                              And yes, again, I understand Americans view copyright as a mere grant of monopoly by government where in other domains it’s a recognition of droit moral. It’s complicated. But it’s not as simple as the music industry would see it.

                              “This is a strong and unambiguous message sent by the European Parliament,” said executive chair Helen Smith.

                              “It clarifies what the music sector has been saying for years: if you are in the business of distributing music or other creative works, you need a licence, clear and simple. It’s time for the digital market to catch up with progress.”

                              1. 4

                                The thing is also that the content-filtering obligation kicks in in the absence of a license. This is why the supposed target of this legislation (YouTube/Google) has lobbied for it, because that’s exactly what they want – not having to change anything. They already have Content ID, which means they won’t need a licensing deal with the rightholders, while everyone around them will have to either make a deal or develop/license their own Content ID.

                              1. 12

                                I like being able to toggle SMT without a visit to the BIOS. Very cool! Works without a reboot:

                                $ sysctl hw.smt
                                $ ls -laR /
                                $ echo in another terminal
                                $ top
                                load averages:  0.25,  0.16,  0.09                       openbsd 08:32:39
                                64 processes: 62 idle, 2 on processor                                            up  0:09
                                CPU0 states: 59.1% user,  0.0% nice, 17.4% sys,  4.2% spin,  1.6% intr, 17.8% idle
                                CPU1 states:  0.0% user,  0.0% nice,  0.0% sys,  0.0% spin,  0.0% intr,  100% idle
                                CPU2 states: 49.7% user,  0.0% nice, 21.4% sys,  4.8% spin,  0.0% intr, 24.2% idle
                                CPU3 states:  0.0% user,  0.0% nice,  0.0% sys,  0.0% spin,  0.0% intr,  100% idle
                                Memory: Real: 465M/1302M act/tot Free: 6268M Cache: 477M Swap: 0K/8129M
                                $ doas sysctl hw.smt=1
                                hw.smt: 0 -> 1
                                $ ls -laR /
                                $ echo in another terminal
                                $ top
                                load averages:  0.16,  0.17,  0.10                       OpenBSD 08:33:49
                                64 processes: 61 idle, 3 on processor                                            up  0:10
                                CPU0 states: 19.0% user,  0.0% nice, 10.4% sys,  5.4% spin,  1.6% intr, 63.7% idle
                                CPU1 states: 15.2% user,  0.0% nice, 10.0% sys,  6.8% spin,  0.0% intr, 68.1% idle
                                CPU2 states: 25.3% user,  0.0% nice, 12.4% sys,  5.4% spin,  0.0% intr, 56.9% idle
                                CPU3 states: 21.0% user,  0.0% nice, 14.0% sys,  6.0% spin,  0.0% intr, 59.1% idle
                                Memory: Real: 464M/1311M act/tot Free: 6259M Cache: 477M Swap: 0K/8129M
                                1. 0

                                  So far I’ve only found one solution that is actually robust. Which is to manually check that the value is not nil before actually using it.

                                  This seems reasonable to me. If anything, I’d consider knowing how and when to use this kind of check a part of language competency knowledge as it is how Go was designed.

                                  1. 9

                                    We expect people to be competent enough to not crash their cars, but we still put seatbelts in.

                                    That’s perhaps a bad analogy, because most people would say that there are scenarios where you being involved in a car crash wasn’t your fault. (My former driver’s ed teacher would disagree, but that’s another post.) However, the point remains that mistakes happen, and can remain undiscovered for a disturbingly long period of time. Putting it all down to competence is counter to what we’ve learned about what happens with software projects, whether we want it to happen or not.

                                    1. 9

                                      I wish more languages had patterns. Haskell example:

                                      {-# LANGUAGE OverloadedStrings #-}
                                      import Data.Text (Text)

                                      data Named = Named { name :: Text } deriving Show

                                      greeting :: Maybe Named -> Text
                                      greeting (Just thing) = "Hello " <> name thing
                                      greeting _ = ""

                                      You still have to implement each pattern, but it’s so much easier, especially since the compiler will warn you when you miss one.

                                      1. 3

                                        Swift does this well with Optionals

                                        1. 5

                                          You can even use an optional type in C++. It’s been a part of the Boost library for a while and was added to the language itself in C++17.

                                          1. 4

                                            You can do anything in C++ but most libraries and people don’t. The point is to make these features integral.

                                            1. 1

                                              It’s in the standard library now so I think it’s integral.

                                              1. 4

                                                If it’s not returned as a rule and not as an exception throughout the standard library it doesn’t matter though. C++, both the stdlib and the wider ecosystem, rely primarily on error handling outside of the type-system, as do many languages with even more integrated Maybe types

                                          2. 2

                                            Yep. Swift has nil, and by default no type can hold a nil. You have to annotate them with ? (or ! if you just don’t care, see below).

                                            var x: Int = nil // error
                                            var x: Int? = nil // ok

                                            It’s unwrapped with either if let or guard let

                                            if let unwrapped_x = x {
                                                print("x is \(unwrapped_x)")
                                            } else {
                                                print("x was nil")
                                            }

                                            guard let unwrapped_x = x else {
                                                print("x was nil")
                                                return
                                            }

                                            Guard expects that you leave the surrounding block if the check fails.

                                            You can also force the unwraps with !.

                                            let x_str = "3"
                                            let x = Int(x_str)! // would crash at run-time if the conversion failed

                                            Then there’s implicit unwraps, which are pretty much like Java objects in the sense that if the object is nil when you try to use it, you get a run-time crash.

                                            let x: Int! = nil
                                        2. 7

                                          Hey, I’m the author of the post. And indeed that does work, which is why I’m doing that currently. However, like I try to explain further in the post, this has quite a few downsides. The main one is that it can easily be forgotten. The worst part is that if you did forget, you will likely find out only via a runtime panic, which, with some bad luck, will occur in production. The point I try to make is that it would be nice to have this be a compile-time failure.

                                          1. 1

                                            Sure, and that point came across. I think you’d agree that language shortcomings - and certainly this one - are generally excused (by the language itself) by what I mentioned?

                                        1. 6

                                          This news caused the public release for XSA-267 / CVE-2018-3665 (Speculative register leakage from lazy FPU context switching) to be moved to today.

                                          1. 16

                                            These embargoed and NDA’d vulnerabilities need to die. The system is broken.

                                            edit: Looks like cperciva of FreeBSD wrote a working exploit and then emailed Intel and demanded they end embargo ASAP https://twitter.com/cperciva/status/1007010583244230656?s=21

                                            1. 8

                                              Prgmr.com is on the pre-disclosure list for Xen. When a vulnerability is discovered, and the discoverer uses the responsible disclosure process, and the process works, we’re given time to patch our hosts before the vulnerability is disclosed to the public. On balance I believe participating in the responsible disclosure process is better for my customers.

                                              Pre-disclosure gives us time to build new packages, run through our testing process, and let our users know we’ll be performing maintenance. Last year we found a showstopping bug during a pre-disclosure period: it takes time and effort to verify a patch can go to production. With full disclosure, we would have to do so reactively, with significantly more time pressure. That would lead to more mistakes and lower-quality fixes.

                                              1. 2

                                                This is a bad response to the issue. The bad guys probably already have knowledge of it and can use it. A few players deemed important should not get advanced notification.

                                                1. 15

                                                  Prgmr.com qualifies for being on the Xen pre-disclosure list by a) being a vendor of a Xen-based system, b) being willing and able to maintain confidentiality, and c) asking. We’re one of six dozen organizations on that list; the criteria for membership are technical and needs-based.

                                                  If you discover a vulnerability you are not obligated to use responsible disclosure. If you run Xen you are not obligated to participate in the pre-disclosure list. The process consists of voluntary coordination to discover, report, and resolve security issues. It is for the people and organizations with a shared goal: removing security defects from computer systems.

                                                  By maintaining confidentiality we are given the ability, and usually the means, to have security issues resolved before they are announced. Our customers benefit via reduced exposure to these bugs. The act of keeping information temporarily confidential provides that reduced exposure.

                                                  You have described a voluntary process with articulable benefits as “needing to die,” along with my response being “bad.” As far as I can tell from your comments you claim “the system is broken” because some people “should not get advanced notice.” I’ve described what I do with that knowledge, and why it benefits my users. I’m thankful the security community tells me when my users are vulnerable and works with me to make them safer.

                                                  Can you improve this process for us? Have I misunderstood you?

                                                  1. 11

                                                    Some bad guys might already have knowledge of it. Once it’s been disclosed, many bad guys definitely have knowledge of it, and they can deploy exploits far, far faster than maintainers, administrators and users can deploy fixes.

                                                    1. 8

                                                      You’re treating “the bad guys” like they’re all one thing. In actuality, there’s a spectrum of bad guys, from people who will use a free attack tool, to people who will pay a few grand for one, to people who can customize a kit if it’s just a sploit, to people who can build a sploit from a description, to the rare people who had it already. There’s also a range in attacker intent, from DoS to data integrity to leaking secrets. The folks who had it already often just leak secrets in a stealthy way instead of doing actual damage. They also use the secrets in a limited way compared to the average black hat. They’re always weighing use vs. detection of their access.

                                                      The process probably shuts down quite a range of attackers even if it makes no difference for the best ones who act the sneakiest.

                                                      1. 4

                                                        The process probably shuts down quite a range of attackers even if it makes no difference for the best ones who act the sneakiest.

                                                        I believe the process is so effective at shutting down “quite a range of attackers” that it works despite: a) accidental leaks [need for improvement of process] b) intentional leaks [abuse] c) black hats on the pre-disclosure list reverse engineering an exploit from a patch. [fraud] In aggregate, the benefit from following the process exceeds the gain a black hat would have from subverting it.

                                                  2. 9

                                                    Well, it’s complicated. (Disclosure: we were under the embargo.)

                                                    When a microprocessor has a vulnerability of this nature, those who write operating systems (or worse, provide them to others!) need time to implement and test a fix. I think Intel was actually doing an admirable job, honestly – and we were fighting for them to broaden their disclosure to other operating systems that didn’t have clear corporate or foundation backing (e.g., OpenBSD, Dragonfly, NetBSD, etc). That discussion was ongoing when OpenBSD caught wind of this – presumably because someone who was embargoed felt that OpenBSD deserved to know – and then fixed it in the worst possible way. (Namely, by snarkily indicating that it was to address a CPU vulnerability.) This was then compounded by Theo’s caustic presentation at BSDCan, which was honestly irresponsible: he clearly didn’t pull eager FPU out of thin air (“post-Spectre rumors”), and should have considered himself part of the embargo in spirit if not in letter.

                                                    For myself, I will continue to advocate that Intel broaden their disclosure to include more operating systems – but if those endeavoring to write those systems refuse to honor the necessary secrecy that responsible disclosure demands (and yes, this means “embargoed and NDA’d vulnerabilities”), they will make such inclusion impossible.

                                                    1. 18

                                                      We could also argue Theo’s talk was helpful in that the CVE was finally made public.

                                                      Colin Percival tweeted in his thread overview about the vulnerability that he learned enough from Theo’s talk to write an exploit in 5 hours.

                                                      If Theo and the OpenBSD developers pieced enough together from rumors to make a presentation that Colin could turn into an exploit in hours, how long have others (i.e., bad guys) who also heard rumors had working exploits?

                                                      Theo alone knows whether he picked-up eager FPU from developers under NDA. Even if he did, there’s zero possibility outside of the law he lives under (or contracts he might’ve signed) that he’s part of the embargo. As to the “spirit” of the embargo, his decision to discuss what he knew might hurt him or OpenBSD in the future. That was his call to make. He made it.

                                                      Lastly, I was at Theo’s talk. Caustic is not how I would describe it, nor would I categorize it as irresponsible. Theo was frustrated that OpenBSD developers who had contributed meaningfully to Spectre and Meltdown mitigation had been excluded. He vented some of that frustration in the talk. I’ve heard more (and harsher) venting about Linux in a 30 minute podcast than all the venting in Theo’s talk.

                                                      On the whole Theo’s talk was interesting and informative, with a sideshow of drama. And it may have been what was needed to get the vulnerability disclosed and more systems patched.

                                                      Disclosure: I’m an OpenBSD user, occasional port submitter, BSDCan speaker and workshop tutor, FreeNAS user and recommender, and have enjoyed many podcasts, some of which may have included venting.

                                                      1. 4

                                                        If Theo and the OpenBSD developers pieced enough together from rumors to make a presentation that Colin could turn into an exploit in hours, how long have others (i.e., bad guys) who also heard rumors had working exploits?

                                                        It was clear to me the day Spectre / Meltdown were disclosed that there would be future additional vulnerabilities of the same class based on that discovery. I think there is circumstantial evidence suggesting the discovery was productive for the people who knew about it in the second half of 2017 before it was publicly disclosed. One can safely assume black hats have had the ability to find and use novel variations in this class of vulnerability for at least six months.

                                                        If Theo did pick up eager FPU from a developer under embargo that demonstrates just how costly it is to break embargo. Five hours, third hand.

                                                        1. 4

                                                          If Theo did pick up eager FPU from a developer under embargo that demonstrates just how costly it is to break embargo. Five hours, third hand.

                                                          I have absolutely no idea what point you’re trying to make. Certainly, everyone under the embargo knew that this would be easy to exploit; in that regard, Theo showed people what they already knew. The only new information here is that Theo is every bit as irresponsible as his detractors have claimed – and those detractors would (of course) point out that that information is not new at all…

                                                          1. 1

                                                            With respect, how is Theo irresponsible for reducing the time the users of his OS are vulnerable?

                                                            Like, the embargo thing sounds a lot to the ill-informed like some kind of super-secret clubhouse.

                                                        2. 4

                                                          Theo definitely wasn’t part of the embargo, but it’s also unquestionable that Theo was relying on information that came (ultimately) from someone who was under the embargo. OpenBSD either obtained that information via espionage or via someone trying to help OpenBSD out; either way, what Theo did was emphatically irresponsible. Of course, it was ultimately his call – but he is not the only user of OpenBSD, and it is unfortunate that he has effectively elected to isolate the community to serve his own narcissism.

                                                          As for the conjecture that Theo served any helpful role here: sorry, that’s false. (Again, I was under the embargo.) The CVE was absolutely going public; all Theo did was marginally accelerate the timeline, which in turn has resulted in systems not being as prepared as they otherwise could be. At the same time, his irresponsible behavior has made it much more difficult for those of us who were advocating for broader inclusion – and unfortunately it will be the OpenBSD community that suffers the ramifications of any future limited disclosure.

                                                          1. 6

                                                            Espionage? You’re suggesting one of:

                                                            1. Someone stole the exploit information, leaked it to the OpenBSD team, a team known for proactively securing their code, on the off-chance Theo would then further leak it (likely with mitigation code), causing the embargoed details to be released sooner than expected,

                                                            2. OpenBSD developers stole the exploit information, then leaked it (while committing mitigation code), causing the embargoed details to be released sooner than expected.

                                                            The first doesn’t seem plausible. The second isn’t worthy of you or any of the developers on the OpenBSD team.

                                                            I’m sure you’ve read Colin’s thread. He contacted folks under embargo after he wrote his exploit code based on Theo’s presentation. The release timeline moved forward. OSs that had no knowledge of the vulnerability now have patches in place. Perhaps those users view “helpful” in a different light.

                                                            Edit: Still boggling over the espionage comment. Had to flesh that out more.

                                                            1. 8

                                                              Theo has replied:

                                                              In some forums, Bryan Cantrill is crafting a fiction.

                                                              He is saying the FPU problem (and other problems) were received as a leak.

                                                              He is not being truthful, inventing a storyline, and has not asked me for the facts.

                                                              This was discovered by guessing Intel made a mistake.

                                                              We are doing the best for OpenBSD. Our commit is best effort for our user community when Intel didn’t reply to mails asking for us to be included. But we were not included, there was no reply. End of story. That leaves us to figure things out ourselves.

                                                              Bryan is just upset we guessed right. It is called science.

                                                              He’s also offered to discuss the details with Bryan by phone.

                                                              1. 4

                                                                Intel still has 7 more mistakes in the Embargo Execution Pipeline™️ according to a report^Wspeculation by Heise on May 3rd.


                                                                Let the games begin! 🍿

                                                                1. 1

                                                                  What’s (far) more likely: that Theo coincidentally guessed now, or that he received a hint from someone else? Add Theo’s history, and his case is even weaker.

                                                                  1. 13

                                                                    While everyone is talking about Theo, the smart guys figuring this stuff out are Philip Guenther and Mike Larkin. Meet them over beer and discuss topics like ACPI, VMM, and Meltdown with them and you won’t doubt anymore that they can figure this stuff out.

                                                                    1. 6

                                                                      In another reply you claim your approach is applied Bayesian reasoning, so let’s go with that.

                                                                      Which is more likely:

                                                                      1. A group of people skilled in the art, who read the relevant literature, have contributed meaningful patches to their own OS kernel and helped others with theirs, knowing that others besides themselves suspected there were other similar issues, took all that skill, experience and knowledge, and found the issue,


                                                                      2. Theo lied.

                                                                      Show me the observed distribution you based your assessment on. Show me all the times Theo lied about how he came to know something.

                                                                      Absent meaningful data, I’ll go with team of smart people knowing their business.

                                                                      1. 4

                                                                        Absent meaningful data

                                                                        Your “meaningful data” is 11 minutes and 5 seconds into Theo’s BSDCan talk: “We heard a rumor that this is broken.” That is not guessing and that is not science – that is (somehow) coming into undisclosed information, putting some reasonable inferences around it and then irresponsibly sharing those inferences. But at the root is the undisclosed information. And to be clear, I am not accusing Theo of lying; I am accusing him of acting irresponsibly with respect to the information that came into his possession.

                                                                        1. 3

                                                                          Here is at least one developer’s comment on the matter. He points to the heise.de article about Spectre-NG as an example of the rumors that were floating around. That article is a long way from “lazy FPU is broken”.

                                                                          Theo has offered to discuss your concerns, what you think you know, what he knew, when and how. He’s made a good-faith effort to get his cellphone number to you. If you don’t have it, ask.

                                                                          If you do have his number, call him. Ask him what he meant by “We heard a rumor that this is broken.” Ask him what rumor they heard. Ask him whether he was referring to the Spectre-NG article.

                                                                          Seriously, how hard does this have to be? You engaged productively with me when I called you out. You’ve called Theo out. Talk to him.

                                                                          And yes, I get it. Your chief criticism at this point is responsible disclosure. But as witnessed by the broader discussion in the security community, there’s no single agreed-upon solution.

                                                                          While you’ve got Theo on the phone you can discuss responsible disclosure. Frankly, I suggest beer for that part of the discussion.

                                                                          Edit: Clarify that Florian wasn’t saying he knew heise.de were the source.

                                                                        2. 0

                                                                          Reread the second sentence in my reply you linked.

                                                                        3. 2

                                                                          This is plain libel, pure and simple.

                                                                          1. -2

                                                                            It is Bayesian reasoning, pure and simple.

                                                                            That said, this is a tempest in a teacup, so call it whatever you want; I’m gonna go floss my cat.

                                                                      2. 6

                                                                        Sorry – I’m not accusing anyone of espionage; apologies if I came across that way.

                                                                        What I am saying is that however Theo obtained information – and indeed, even if that information didn’t originate with the leak but rather by “guessing” as he is now apparently claiming – how he handled it was not responsible. And I am also saying that Theo’s irresponsibility has made the job of including OpenBSD more difficult.

                                                                        1. 9

                                                                          The Spectre paper made it abundantly clear that additional side channels will be found in the speculative execution design.

                                                                          This FPU problem is just one additional bug of this kind. What I’d like to learn from you is:

                                                                          1. What was the original planned public disclosure date before it was moved ahead to today?

                                                                          2. Do you really expect that a process with long embargo windows has a chance of working for future spectre-style bugs when a lot of research is now happening in parallel on this class of bugs?

                                                                          1. 5
                                                                            1. The original date for CVE-2018-3665 was July 10th. After the OpenBSD commit, there was preparation for an earlier disclosure. After Theo’s talk and after Colin developed his POC, the date was moved in from July 10th to June 26th, with preparations being made to go much earlier as needed. After the media attention today, the determination was made that the embargo was having little effect and that there was no point in further delay.

                                                                            2. Yes, I expect that long embargo windows can work with Spectre-style bugs. Researchers have been responsible and very accommodating of the acute challenges of multi-party disclosure when those parties include potentially hypervisors, operating systems and higher-level runtimes.

                                                                            1. 10

                                                                              Thanks for disclosing the date. I must say I am happy that my systems are already patched now, rather than in one month from now.

                                                                              I’ll add that some new patches with the goal of mitigating spectre-class bugs are being developed in public without any coordinated disclosure:



                                                                          2. 5

                                                                            Thanks for the clarification.

                                                                            I don’t think early disclosure is always irresponsible (the details of what and when matter). Others think it’s never irresponsible; and some that it’s always irresponsible. Good arguments can be made for each position that reasonable people can disagree about and debate.

                                                                            One thing I hope we can all agree on is that we need clear rules for how embargoes work (probably by industry). We need clear, public criteria covering who, what, when and how long. And how to get in the program, ideally with little or no cost.

                                                                            It’s a given that large companies like Microsoft will be involved. Open-source representatives should have a seat at the table as well. But “open source” can’t just mean Red Hat and a few large foundations. OSs like OpenBSD have a presence in the ecosystem. We can’t just write the rules with a “You must be this high to ride” sign at the door.

                                                                            And yeah, Theo’s talk might make this more difficult going forward. Hopefully both sides will use this event as an opportunity to open a dialog and discuss working together.

                                                                            1. 6

                                                                              Right, I completely agree: I’m the person that’s been advocating for that. I was furious with Intel over Spectre/Meltdown (despite our significant exposure, we learned about it when everyone else did), and I was very grateful for the work that OpenBSD and illumos did together to implement KPTI. This time around, I was working from inside the embargo to get OpenBSD included. We hadn’t been able to get to where we needed to get, but I also felt that progress was being made – and I remained optimistic that we could get OpenBSD disclosure under embargo.

                                                                              All of this is why I’m so frustrated: the way Theo has done this has made it much more difficult to advocate this position – it has strengthened the argument of those who believe that OpenBSD should not be included because they cannot be trusted. And that, in my opinion, is a shame.

                                                                              1. 11

                                                                                Look at it from OpenBSD’s perspective though. They (apparently) tried emailing Intel to find out more, and were told “no”. What were they supposed to do? Just wait on the hope that someone, somewhere, was lobbying on their behalf to be included, with no knowledge of that lobbying?

                                                              1. 4

                                                                One thing that puts me off from using kore is the degree of magic that seems to come with being a “framework”. No main function. An obscure documentation system. A baked-in build system. A baked-in web server. That’s a lot of duplication of mature, well-known tools—make, httpd, and so on—with none of them being trivial at all. I guess C to me is synonymous with UNIX, and frameworks like this go against what UNIX means: manpages, doing a small thing well and fitting into a larger framework, etc. It would be nice, all that being said, to split these tools apart and use them separately. (A well-written HTTP server library would be very handy—and there are a lot of questionable ones out there.)

                                                                1. 5

                                                                  Kore author here.

                                                                  You must have used Kore ages ago. The build tool and web server are two separate things and have been for a while.

                                                                  If you don’t want to use the build tool to help you get started, build your app properly and automatically, or get any other benefits from it, you can roll your own Makefiles. The applications are normal dso files mapped into the server’s address space by dlopen() anyway. Those aren’t magic.

                                                                  Not having a main function sort of goes hand in hand with the fact that your apps are dsos.

                                                                  Yes there are more things the build tool can help you with, like injecting assets or building a single binary out of your application instead of a dso.

                                                                  I fully agree that the 2 year old documentation is shit, and that’s something I’m fixing for the next release :)

                                                                  1. 3

                                                                    Not to mention writing a web app in C sounds like a masochistic security minefield due to all the string processing it normally entails. C++ wouldn’t be as bad with std::string, but even then it sounds dreadful.

                                                                    So it’s not for me, but it still looks like a solid project, and it’s filling an interesting niche.

                                                                    1. 6

                                                                      I don’t think kristapsdz got the memo about not writing web apps in C. https://learnbchs.org/ ;-)

                                                                  1. 4

                                                                    This looks very promising. As soon as I saw it I thought “plan9”, and sure enough, it’s somewhat related, using devdraw from plan9port.

                                                                    The widget set is complete enough to create functioning apps. The developer has created several demo apps, including a local mail reader, acme clone, and a simple database browser and query tool.

                                                                    And it’s pure Go.

                                                                    Very promising, indeed.

                                                                    1. 5

                                                                      I run OpenBSD as my daily driver and have for years. I’m a developer and a bit of a minimalist, preferring text-oriented tools, and lightweight window managers like cwm, xmonad, or rio. But that doesn’t mean it can’t be used by folks who want the “full” desktop experience if that’s what you’re looking for (Gnome, KDE, Xfce, Lumina).

                                                                      You’ll find that approximately the same selection of open-source apps you run on Linux is available on OpenBSD (and the other BSDs). The edge cases in my experience are Linux-only ports (MS SQL Server), Linux API/kernel-dependent projects like Docker (FreeBSD has jails, a superior choice anyway), and commercial software (e.g., games, Skype, Dropbox). And strange abominations BSD users would rather avoid, like systemd.

                                                                      You also won’t find apps like VirtualBox, but you will find good alternatives like vmm(4) on OpenBSD, and bhyve on FreeBSD.


                                                                      Going on to OpenBSD-specific recommendations, a lot of legacy hardware (x86, amd64, SPARC, and PowerPC) still works on OpenBSD. And a lot of very current hardware as well (x86, amd64, arm, SPARC64, MIPS, and others). x86 and amd64 are probably the best options for desktop/laptop use.

                                                                      Before getting to my configuration and use, I’ll mention some gaps you might find troubling:

                                                                      • file systems (ffs, pick one) (ext2, dos, or fuse are available for interoperability)
                                                                      • GPUs (most will work, but you won’t be doing any CUDA on OpenBSD)

                                                                      My Laptop

                                                                      Currently I run OpenBSD on a Thinkpad T450s. Both wired and wireless (em(4), iwm(4)) work flawlessly.

                                                                      External monitors work very well with xrandr. At a previous job I had a 4k monitor. I ran it alone and together with the internal monitor without issue, though you’ll not be gaming on a 4k monitor at 60 fps with the Intel 5500 series.

                                                                      Suspend/resume works. I don’t use hibernate much, but it worked when I tried it.

                                                                      As far as I can tell all the sensors work. Battery status and charge remaining work.

                                                                      I have a docking station. When I dock/undock the switching is seamless. As with any OS, you have to connect/disconnect some devices during the process (e.g., USB disks … could be somewhat automated). Mice, keyboards and many peripherals connect/disconnect without issues. To manage switching between wired/wireless networking I wrote a script run by apm(8).
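
                                                                      The wired/wireless switch can be sketched as a tiny shell script. This is a guess at what such an apm(8) hook might look like, not the script I actually run; the interface names (em0, iwm0) and the /etc/apm/resume path are assumptions:

```shell
#!/bin/sh
# Hypothetical /etc/apm/resume hook: prefer the wired NIC when it has
# link, fall back to wireless otherwise. Interface names are guesses.

pick_iface() {
    # $1 is the wired interface's "status:" value ("active" when a
    # cable is plugged in and link is up).
    if [ "$1" = "active" ]; then
        echo em0
    else
        echo iwm0
    fi
}

status=$(ifconfig em0 2>/dev/null | sed -n 's/.*status: *//p')
iface=$(pick_iface "$status")
echo "bringing up $iface"
# On a real system you would then run something like:
#   sh /etc/netstart "$iface"
```

                                                                      apmd(8) on OpenBSD executes scripts in /etc/apm on suspend/resume, which is the hook point assumed here.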

                                                                      Software (apps I use)

                                                                      • backups and syncing (rsync and unison)
                                                                      • cli (xterm, ksh, tmux)
                                                                      • dev (sed, awk, python, haskell, php, javascript, perl, c)
                                                                      • email (acme, mutt)
                                                                      • image viewing and manipulation (feh, ImageMagick)
                                                                      • media (mplayer)
                                                                      • office (LibreOffice)
                                                                      • printing (lpr)
                                                                      • reading (more, mupdf, Calibre for ebooks)
                                                                      • videos (mplayer, youtube-dl, Chrome)
                                                                      • web browsing (w3m, Chrome)
                                                                      • writing (ed, acme, pandoc)

                                                                      On the whole, I find OpenBSD a perfectly usable desktop OS, and it’s certainly my preference. I occasionally have to use other OSs due to work requirements, or where there’s no available software (see comments above about Skype).

                                                                      1. 1

                                                                        My initial searches proved worthless; I’ll try to set this up after work. Thanks a ton! I’m mathuin from bsd.network, by the way, so this is really on point.

                                                                        1. 2

                                                                          I had been thinking about writing some articles about plan9 and Acme. Your question on https://bsd.network prompted me to get started. Let me know how it goes.

                                                                          1. 1

                                                                            I was wondering why you chose msmtp over OpenBSD’s smtpd(8)?

                                                                            1. 1

                                                                              A couple of reasons:

                                                                              1. I already have it configured for the multiple services I send through (personal domain, Gmail),
                                                                              2. msmtp is more of a client that functions like sendmail(8), whereas smtpd(8) would need to be configured as a relay host,
                                                                              3. it also runs in user space.
                                                                        1. 2

                                                                          I’ve played with using a program called amail to read mail in acme, but can’t recommend it. It’s written in a literate style, which means the actual code produced is just a giant pile of slinkys and impossible to read by itself.

                                                                          It’s on my long list of projects to write a maildir adapter for Mail or add maildir support directly so that I can continue to use my mbsync+msmtp workflow.

                                                                          1. 2

                                                                            This might be overkill, but set up dovecot(1) on your local system and let it serve email to Acme via IMAP. Your current workflow should continue to work.

                                                                            I used to do something similar when I used gnus on Emacs. The only difference is I used the doveadm(1) sync command to keep the local in sync with the primary server. For dovecot-to-dovecot syncing, doveadm(1) is the way to go.
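
                                                                            A minimal sketch of what that local dovecot(1) setup might look like; the Maildir path and the loopback-only restriction are my assumptions:

```
# Hypothetical fragment of a local dovecot.conf: serve the existing
# Maildir over IMAP on the loopback interface only.
listen = 127.0.0.1
protocols = imap
mail_location = maildir:~/Maildir
ssl = no    # loopback only, so skip TLS
```

                                                                            With something like this in place, mbsync keeps ~/Maildir current as before, and the mail client simply speaks IMAP to 127.0.0.1.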

                                                                          1. 1

                                                                            I cringe when I see the practice of putting cleartext passwords in any text file, especially the dreaded .netrc.

                                                                            Supposedly secstore(1) could help with that, but I have never ventured further into it. Can somebody say anything about the security aspects of these programs in plan9port?

                                                                            1. 1

                                                                              With msmtp(1) you should use the passwordeval option, which evaluates an expression and uses whatever is returned on stdout as the password:

                                                                              gpg2 --no-tty -q -d ~/.msmtp-password.gpg

                                                                              Install pinentry-gtk2 and you’ll get a nice dialog box.
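
                                                                              Put together, the relevant ~/.msmtprc account might look like the sketch below; the account name, server, and addresses are placeholders, not values from the article:

```
# Sketch of a ~/.msmtprc account using passwordeval.
account        personal
host           mail.example.org
port           587
auth           on
tls            on
user           jdoe
from           jdoe@example.org
passwordeval   gpg2 --no-tty -q -d ~/.msmtp-password.gpg

account default : personal
```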

                                                                              I intended to mention the passwordeval option, but the writing went into the wee hours and it was lost. :D I’ve updated the $HOME/.msmtprc example with a note referencing it.

                                                                              As for secstore(1), that’s a backing store for factotum(4). I think you could use passwordeval with factotum(4).

                                                                              1. 1

                                                                                How does one set up factotum with secstore? Can I use it the same way I use pass? If I don’t explicitly use secstore, will I have to set the secret every time I start factotum?

                                                                                1. 1

                                                                                  iirc, yeah, you’ll get prompted. Well, you may get prompted. I don’t think things like auth/fgui(1) got brought over.

                                                                            1. 4

                                                                              Rather than the video, I want to see the car’s telemetry stream just prior to the collision. Logs too, please.

                                                                              1. 12

                                                                                What’s the best way to use Mastodon? I appreciate its dedication to privacy, but the distributed nature of Mastodon confuses me. I feel like it’s the World of Warcraft server problem, where it’s impossible to find a server with all your friends on it without one server having everyone on it.

                                                                                1. 11

                                                                                  You don’t have to be on the same server, you can follow accounts from other instances.

                                                                                  1. 10

                                                                                    Many of us in the BSD world went with https://bsd.network/.

                                                                                    You might try the instances.social finder: https://instances.social/.

                                                                                    One of the things I like about Mastodon is I can join a server (or servers) that are more closely aligned to my interests. By monitoring the instance timeline I see all the posts. I don’t have to find a bunch of people to follow immediately. I can grow the list by following the folks I notice posting things I enjoy reading.

                                                                                    1. 2

                                                                                      What network do Haskellers use?

                                                                                      1. 4
                                                                                      2. 1

                                                                                        Yeah that’s one of the things I really dig about it. It’s a metacommunity. You find an instance that focuses on your interests, or create one of your own if that’s what floats your boat, and it becomes a microcosm unto itself, but you all still share the global timeline.

                                                                                      3. 6

                                                                                        Replace instance with server and mastodon with e-mail. Then all these explanations become less confusing. Unless your server’s admin or your peer’s admin blocks your messages, you can write messages to every peer on every other server as you see fit.

                                                                                        Does that make sense to you?

                                                                                        1. 4

                                                                                          You don’t need to find a server with everyone on it, since federation allows you to follow people on other servers. I do recommend simply finding a community you enjoy and using that as home base.

                                                                                          1. 1

                                                                                            With sites like joinmastodon.org or instances.social, I haven’t experienced this to be too much of a problem. Yes, it takes a while, but you can always delete an account and create a new one on another instance.

                                                                                            To me, the real problem seems to be the difficulty of finding interesting accounts to follow and creating an informative, fun, and interesting stream. There are a lot of inactive accounts, regardless of the server (mine is one of them), and some active ones, but I can’t really re-create the same level of experience as on Twitter, Reddit, or image boards, for example, even though I have stopped using all of these by now.