Threads for viraptor

  1. 7

    I’m somewhat skeptical of this, primarily because they want to base it off ActivityPub, a standard that is notoriously badly implemented (the status quo of most implementations is making it work with Mastodon and themselves). Also, I don’t see why software development needs to use a protocol meant for social media, especially when many developers complain about those github features.

    1.  

      This is a social network though. Not in a Facebook way, but it’s effectively a network of servers sharing streams of updates about projects. Just because we don’t like the social features in GH doesn’t mean the protocol itself doesn’t represent the updates/issues/PR interactions well.

      1.  

        You can represent the updates/issues/PRs in email just fine (it has been done, multiple times even). You can represent them with git refs (also done, multiple times). You can represent these interactions in a multitude of ways that all work well. They of course have tradeoffs, but I don’t see any advantage in using ActivityPub for these interactions besides them being viewable on traditional decentralized social media services, and I don’t see any benefit for software development in that.

        1.  

          There are two problems with that: 1) there’s no good interface for git-over-email, and 2) it brings along all the deliverability issues of email. I really don’t want to chase maintainers on other networks to ask them to check their spam folder, or debug why Gmail rejects my domain today.

          1.  

            I think that any problem you can have with email can also happen with ActivityPub; it will just take longer to fix, since the ecosystem is so much younger.

            1.  

              1: Patchwork and sourcehut work just fine, and there are plenty of customizable email clients to work with them too. Zero problems there. 2: Email deliverability is not a problem for git patches; I’ve legitimately never had problems with it. Email is probably one of the most reliable methods of communication FWIW, and you only encounter problems when you are sending large amounts of automated email to the large email silos (Google, Microsoft, etc).

              Besides, why are you focusing on email? You can still represent those interactions in a bunch of other ways (git, as I suggested for example).

              1.  

                My iPad mail client won’t let me send an email that patchwork or sourcehut will accept. I do a significant amount of open source work on my iPad.

                1.  

                  I do agree that sourcehut not accepting mails with HTML content is a drawback. I would say that a mail client that won’t let you send an email without HTML content isn’t good either, though. Email clients are relatively easy to change; maybe you should look for a better one.

                  1.  

                    With all due respect, I like Apple Mail and it works fine on literally every other service I use. Blocking access from the most used mail client on the most popular devices in the world is not a sustainable approach. My mail client is perfectly capable for everything I need to do. The service not accepting it is the one that is broken. Apple Mail sends a plaintext part anyways, so there’s no reason that the service can’t strip the HTML component from the email and forward it on.

                    I know how to send plain text email, but a lot of the time I don’t want to think about how my email is sent. I want to think about the email I’m sending or the contribution I’m making. The main problem with a lot of these solutions is that they don’t consider the user experience of users who aren’t computing gods. Hell, I’m considered a computing god, but I don’t want to have to be in galaxy brain mode 9001% of the day. You really have to meet people where they are and move forward from there. This is the dark side of mail user agent diversity: some mail user agents make decisions that you may think are dumb. But thinking the mail user agent is dumb and refusing to support it only says “we don’t want you to participate”, and that prevents communities from growing.

                    Believe what you want though.

                    1.  

                      Apple Mail sends a plaintext part anyways, so there’s no reason that the service can’t strip the HTML component from the email and forward it on.

                      It cannot. Stripping the HTML part breaks DKIM, and since usually at least one of SPF and DKIM must be valid, and SPF is invalid for mailing-list forwarded emails, the email as-is cannot be forwarded. You could resend the contents from some other address, but that is ugly and breaks patch authoring, so you’d need special casing for that. It’s honestly just easier to reject HTML emails.

                    2.  

                      Sometimes it’s important to meet users where they are, not expect them to change in order to have a successful collaboration. Perhaps the workflow isn’t the best one if you have to ask users to change their email client, or even worse their email provider (as you did in another comment).

                      1.  

                        Sometimes it’s useless to try working with people that are not willing to change. For example, the same argument could be made for making my website available over Gopher, because somebody’s ancient computer has no modern browser. But is it really useful for me to make extra effort on making my website available to them, when it would constrain me on what I could do without special casing different protocols? Sometimes, it is time for the user to change.

                        1.  

                          Perhaps it’s those who are still trying to bring the email workflow back that should change? They seem unwilling to change, so I guess we shouldn’t try to convince them? Likely their niche will remain small without some huge improvements in the user experience.

                  2.  

                    I’m regularly getting git patches in my spam. And that’s on top of general deliverability issues. Also recently I’ve witnessed a group of very senior devs spending ~2 weeks resolving various issues to get the email workflow working correctly. I’m glad it works for you, but it’s not a universal experience.

                    I concentrated on email because you mentioned it as an example. Git refs themselves don’t solve the distribution issue: you need to accept the PR itself, or a notification about the PR, somehow. In theory you could accept unauthenticated pushes of branches, but that comes with its own issues.

                    1.  

                      What I read is that you have problems with your email provider, not with the git-over-email workflow. And while yes, a bunch of big providers handle it badly, that doesn’t mean you can’t move to a good one. From the workflow side I’m not sure where the snags are; of course there is a need for a transition period between workflows, but I’d say it wouldn’t be much different from moving in the opposite direction (from email to web PRs).

                      As for git, it need not be (traditional) branches. Git can handle arbitrary refs, and you can design your PRs, issues, etc. to sit under separate ref namespaces, and only put restrictions on how those refs are managed (e.g. only forward pushes for unauthenticated users). You can design your files to be usable with simple-to-automate union merges (for an example, see git-annex). There are ways to extend what already exists into fairly extensive systems.
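                      For illustration, a minimal sketch of a “PRs under a separate ref namespace” setup might look like this (the refs/prs/ name is my own convention for the example, not a git standard):

                      ```shell
                      # Create a throwaway repo with one commit.
                      git init -q demo
                      cd demo
                      git -c user.name=demo -c user.email=demo@example.com \
                          commit -q --allow-empty -m "initial commit"

                      # Record a proposed change under refs/prs/ instead of as a branch.
                      # A server could allow unauthenticated, fast-forward-only pushes here.
                      git update-ref refs/prs/1 HEAD

                      # The ref is real and fetchable, but invisible to branch listings:
                      git for-each-ref refs/prs/
                      ```

                      The same idea generalizes: issues or reviews could live under their own namespaces, with the server enforcing per-namespace push rules.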

                      1.  

                        Most people aren’t moving from email to web PRs. They’ve always used the web and never learned an email flow. So switching to email would require mass retraining.

                        1.  

                          And in 2008 most users hadn’t heard of web PRs and only knew email. The retraining already happened once; it can happen again.

                          1.  

                            Actually it didn’t, as most of the users who are working on web PRs started directly on the web. They weren’t collaborating on software in 2008. I think you make some compelling arguments around the reliability of email delivery versus web federation, but I disagree that from a user perspective the email workflow is the right one to use.

          1. 23

            This is an interesting issue, and I think there’s way more to think about than what’s mentioned in the article.

            If we’re purely talking about Copilot, I’m not really a fan, but that’s a whole separate other issue. It’s simply hard to abandon GitHub because viable alternatives are few and far between and every single one comes with its own set of trade-offs.

            1. GitHub has a critical mass of open source projects. Being able to use a single account to contribute to a large majority of open source projects is a nice thing. Encouraging people to contribute is a good thing. If I want to contribute to something hosted on someone’s custom Gitea instance (as an example), I’ll think very hard if it’s worth it before making an account.
            2. Tooling. So much tooling is only really possible because they primarily focus on a single platform (GitHub).
            3. Features. Issues, pull requests, project management, releases, etc. Some of these are done by other platforms as well, but few support the wide variety of features that GitHub does.
            4. I’m not a fan of the alternatives.
            • Gitea is my general recommendation for self-hosting - it’s fairly minimal, has great community support, and has enough integrations for most people/companies. Unfortunately, it requires a new account for every developer who wants to contribute to a project on that specific gitea instance.
            • Gitlab is probably the closest in terms of features, but it’s a very large piece of software requiring a comparatively large machine and some very specific setup (at least if you self host).
            • Bitbucket is decent, but very Atlassian-focused and definitely more interested in the business market than open source.
            • SourceHut is a solid piece of software, but it uses a workflow many developers are not familiar with and I’m uncomfortable using it because of how vitriolic the owner has been in the past (both on his blog and when I’ve tried to contribute to his software).
            1. 20

              Gitea is actively working on ActivityPub federation which should make it much easier for people to contribute to random projects they see without having to specifically make an account on a new server.

              1. 15

                And email works just fine for decentralization on projects not looking for other social features like stars—though the workflow is alien to those who have only experienced the merge request flow.

                1. 4

                  Do you get nice 3-way diffs when you review PRs via email?

                  1. 6

                    Email is a protocol. Your client can produce 3 way diffs if it likes, same as it could if the patch came any other way.

                    1. 3

                      Fair enough. Are there any email clients that do this?

                      1. 3

                        The git book has a section on its email commands: git format-patch generates a patch suitable for sending as plaintext email.

                        Any command line mail client can be used to pipe those to your difftool of choice; building it as a specific feature doesn’t really make sense because command line tools expect you to use pipes to compose features.
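                        As a concrete sketch (throwaway repo setup included so it runs standalone), the plumbing looks something like this; feeding the patch to a pager, diffstat, or your difftool is ordinary shell composition:

                        ```shell
                        # Throwaway repo with two commits so there is something to diff.
                        git init -q demo && cd demo
                        git -c user.name=demo -c user.email=demo@example.com \
                            commit -q --allow-empty -m "initial commit"
                        echo "hello" > hello.txt
                        git add hello.txt
                        git -c user.name=demo -c user.email=demo@example.com \
                            commit -q -m "add hello.txt"

                        # Turn the latest commit into a plaintext, mailable patch file.
                        git format-patch -1 HEAD -o patches/

                        # Review it however you like, e.g. a quick diffstat:
                        git apply --stat patches/0001-*.patch
                        # git am --3way patches/0001-*.patch   # apply with a 3-way merge
                        ```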

                        1.  

                          Probably not directly in the email client? But as an Emacs user, I feel like it would be pretty trivial to go from mu4e or Gnus to ediff-merge-buffers. If you’re using some graphical diff tool, I can’t imagine that it would be hard to get a nice 3-way diff using a patch file.

                          1. 2

                            Sourcehut is working on being one. I bet someone (me, eventually, at least) will make a bridge to the AP stuff too, so that the services that speak AP but not email can still be used over email.

                            1. 5

                              Going from “you can do this decentralized” to “well, to get the features you really want there, you’ll need this centralized service’s implementation” in the space of like two comments is impressive.

                              1. 3

                                Neither of the things I mentioned (sourcehut, self-hosted free software, and AP, a decentralized social network protocol) is a centralized service.

                                1. 4

                                  The fact that people can self-host sourcehut does not make real-world typical use of sourcehut be decentralized. If — and this is a big if — sourcehut were to take off in popularity, for example, it would not be sourcehut the self-host-able piece of software taking off, it would be a particular instance of sourcehut taking off, and thus becoming the centralized bottleneck all over again.

                    2. 5

                      You don’t really need/want federation for that, just SSO.

                      1. 4

                        And it will gain maybe a few extra users as a result, but never achieve critical mass the way centralized services have, because there are no technological solutions to the social forces that drive centralization.

                        1. 1

                          This is extremely appealing, I had no idea.

                          1. 1

                            While it would be great to make dev work easier, it unfortunately doesn’t solve search/discovery. GH is still a great place to search for open source projects. I don’t know if AP can address that part, but I bet someone will tackle it next.

                            1. 3

                              But not without flaws. I dislike that I can’t search for an exact string.

                              1. 2

                                Pretty sure Google indexes GitHub and also all other places, so searches there work even better IME :)

                            2. 7

                              Well that’s the point. The call is to give up some of those things, a sacrifice in protest of GitHub’s abusive behaviours.

                              Note I said ‘some of’ because honestly, GitHub’s PR interface and some of its other features are…less than great.

                              1. 3

                                Unfortunately, it requires a new account for every developer who wants to contribute to a project on that specific gitea instance.

                                This really kills gitea, for now. The federation will revive it ♥

                              1. 1

                                While the comparison is interesting, the performance profile makes me think that they’re querying raw data. Without seeing the details I’m speculating, but investing in scheduled daily/monthly rollups could give a massive performance boost compared to improving the query execution time. Just a bit of a “the annual report probably shouldn’t be so much slower than the monthly one” feeling.

                                1. 43

                                  Tabs have the problem that you can’t determine the width of a line, which makes auto-formatted code look weird when viewed with a different width. And configuring them everywhere (editor, terminal emulator, various web services) to display as a reasonable number of spaces is tedious and often impossible.

                                  1. 23

                                    I agree with you, tabs introduce issues up and down the pipeline. On GitHub alone:

                                    • diffing
                                     • are settings per person or per repo?
                                    • yaml or python, where whitespace is significant
                                    • what if you want things to line up, like comments, or a series of statements?
                                    • combinations of the above interacting

                                    If you’re turning this into, say, epub or pdf, would you expect readers and viewer apps to be able to adjust this?

                                     I fixed up some old code in a book this week; tabs were mostly 8 spaces, but, well, it varied chapter by chapter. Instead of leaving ambiguity, mystery, puzzling, and headaches for future editors and readers to trip over, I made them spaces instead.

                                    1. 8

                                      I don’t get the point about yaml and python. You indent with tabs, one tab per level, that’s it. What problems do you see?

                                      1. 4

                                        In the Python REPL, tabs look ugly. The first one is 4 columns (because 4 columns are taken up by the “>>> “ prompt), the rest are 8 columns. So you end up with this:

                                        >>> for n in range(20):
                                        ...     if n % 2 == 1:
                                        ...             print(n*n)
                                        
                                        1. 9

                                          When I’m in the Python REPL, I only ever use a single space. Saves so much mashing the spacebar and maintaining readability is never an issue as I’m likely just doing some debugging:

                                          >>> for n in range(20):
                                          ...  if n % 2 == 1:
                                          ...   print(n*n)
                                          
                                          1. 3

                                            True, but this shows that tabs don’t work well everywhere. Spaces do.

                                            1. 1

                                              Unless you use a proportional font.

                                              1. 2

                                                Even with a proportional font, all spaces have the same width.

                                        2. 3
                                          def a():
                                          	x
                                                  y
                                          

                                          The two lines look the same, but to the Python interpreter they’re not: one is indented with a tab, the other with spaces. Which is why you should use just spaces or just tabs.

                                          1. 17

                                            Don’t mix tabs and spaces for indentation, especially not for a language where indentation matters. Your code snippet does not work in Python 3:

                                            TabError: inconsistent use of tabs and spaces in indentation

                                            1. 1

                                              That was my point.

                                              1. 3

                                                Your point is don’t mix tabs and spaces? Nobody proposed that. The comment you responded to literally states:

                                                You indent with tabs, one tab per level, that’s it.

                                                Or is your point don’t use tabs because if you mix in spaces it doesn’t work?
                                                Then my answer is don’t use spaces, because if you mix in tabs it doesn’t work.

                                        3. 8

                                          what if you want things to line up, like comments, or a series of statements?

                                          https://nickgravgaard.com/elastic-tabstops/

                                          1. 2

                                            I appreciate that this is still surfaced, and absolutely adore it. I’d have been swayed by “tabs for indenting, spaces for alignment, for the sake of accessibility” if not for Lisp, which typically includes indents of a regular tab-width off of an existing (arbitrary) alignment, such that indentation levels don’t consistently align with multiples of any regular tab stops (e.g. the spaces preceding indentation level 3 might vary from block to block depending on the context, and could even be at an odd offset). Elastic tabstops seem like the only approach that could cater to this quirk, though I haven’t tried the demo with that context in mind.

                                            I also lament the lack of traction in implementations for Emacs, though it’s heartwarming to see the implementations that are featured. Widespread editor support may be the least of the hurdles to adoption, which feels like a third-party candidate in a two-party system. Does .editorconfig include elastic tabstops as an option? I’m not sure exactly how much work adding that option would entail, but it might be a great way to contribute to the preservation of this idea without the skills necessary to actually implement support in an editor.

                                          2. 9

                                            what if you want things to line up

                                            Easy. Don’t.

                                              If you want to talk about diffing issues, then look at the diffs around half the Haskell community: a new value being longer forces a whole block to shift, requiring either a bunch of manual manipulation or running a tool to re-parse and typeset your code, just because you felt like things had to line up.

                                            1. 3

                                              what if you want things to line up, like comments, or a series of statements?

                                              Then you put spaces after your tabs. https://intellindent.info/seriously/

                                            2. 2

                                              I use tabs and autoformatters. I don’t think my code looks weird at any width between 2 and 8. What kind of weirdness are you referring to? As for configuring: most developers have a dotfiles repo and manicure their setup there, so why would setting a tab width be more tedious than what most people already do anyway?

                                              1. 5

                                                Let’s say that you have the maximum line length set to 24 columns (just to make the example clear). You write code like this:

                                                if True:
                                                    print("abcdefg")
                                                    if False:
                                                        print("xyz")
                                                

                                                With the tab width set to 4 columns, your autoformatter will leave all lines alone. However, if someone has the tab width set to 8, the fourth line will overreach the limit. If they’re also using the same formatter, it will break up the fourth line. Then you’ll wonder why it’s broken up, even though it’s the same length as the second line, which wasn’t broken up. And your formatter might join it up again, which will create endless conflicts.

                                                1. 4

                                                  Optimal line reading length is around 60 chars per line, not 60 characters including all leading whitespace. Setting bounds based on character from column 0 is arbitrary, and the only goal should be not too many characters per line starting at the first non-whitespace character (and even this is within reason because let’s be real, long strings like URLs never fit).

                                                  1. 3

                                                    Setting bounds based on character from column 0 is arbitrary

                                                    Not if you print the code in mediums of limited width. A4 paper, PDF, and web pages viewed from a phone come to mind. For many of those, a hard limit of 80 columns is a pretty good default.

                                                    1. 1

                                                      That is a fairer point, as I was referring to looking at code in an editor; we’ve been discussing mediums where users can easily adjust the tab width, which is more on topic than static mediums. Web pages are the weird one: it should technically be just as easy to configure the width there, but browsers have made it obnoxious or impossible to set our preferred width instead of 8 (I commented about it in the Prettier thread, as people seem so riled up about it looking bad on GitHub instead of seeing the bigger picture that GitHub isn’t where all source code lives).

                                                      1. 5

                                                        Note that my favourite editor is the left half of my 13-inch laptop screen…

                                                  2. 1

                                                    I never really understood the need for a maximum length when writing software. Sure it makes sense to consider maximum line length when writing for a book or a PDF, but then it’s not about programming but about typesetting; you also don’t care about the programming font unless you’re writing to publish.

                                                    If you really want to set a maximum line length, I’d recommend to have a maximum line length excluding the indentation, so that when you have to indent a block deeper or shallower, you don’t need to recalculate where the code breaks.

                                                    But really don’t use a formatter to force both upper and lower limits to line lengths; sometimes it makes sense to use long lines and sometimes it makes sense to use short lines.

                                                    1. 5

                                                      Maximum line length makes sense because code is read more often than it’s written. In terms of readability, you’re probably right about maximum line length excluding indentation. But on the other hand, one of the benefits of maximum line length is being able to put multiple text buffers side-by-side on a normal monitor. Perhaps the very smart thing would be a maximum of 60 chars, excluding indentation, with a max of 110 chars including indentation. Of course, you have to treat tabs as a fixed, known width to do that.

                                                      1. 3

                                                        I never really understood the need for a maximum length when writing software.

                                                        There are a bunch of editing tasks for which I want to view 2 or 3 different pieces of code side by side. I can only fit so many editors side by side at a font size that’s still readable.

                                                        • looking at caller and callee
                                                        • 3 way diff views
                                                        • old and new versions of the same code
                                                        1. 3

                                                          Personally, I hate manually breaking up lines when they get too long to read, so that’s what an autoformatter is for. Obviously the maximum readable length differs, but to do it automatically, one has to pick some arbitrary limit.

                                                          1. 1

                                                            Sure, but there’s a difference between breaking lines when they get too long, and putting them together again when they are too short.

                                                            When I use black to format Python code, it always annoys me that I cannot make lines shorter than the hard limit. I don’t really care that I can’t make them longer than some arbitrary limit. Sure, the limit is configurable, but it’s per-file, not per-line.

                                                            If the problem you have is “where should I split this 120-character one-liner that’s indented with 10 tabs”, then tabs aren’t your problem.

                                                  1. 22

                                                    3.3.2 Medium Fail2Ban daemon running as root

                                                    The fail2ban daemon is running with root privileges. According to its documentation, this application supports running as a non-root user, which is preferable. fail2ban handles data from external sources (log text), which makes it part of the external attack surface. We recommend that all services be running as separate non-root users, with capabilities constrained to the minimum required for operation.

                                                    Disclaimer, I’ve been a fierce critic of fail2ban, for SSH I’m a huge advocate of just firewalling sshd behind spiped:

                                                    • This drastically increases protection against zero-days in SSH, since an attacker would need a zero-day in both spiped and sshd
                                                    • spiped is much more lightweight than WireGuard (let alone OpenVPN); it can run as a non-privileged user and be fully systemd-hardened
                                                    • It reduces the attack surface of a misconfigured sshd to almost zero (because the attacker needs to get through spiped before hitting sshd, and spiped is basically a “256-bit-combination port knocker”)
                                                    • It’s written by Colin Percival, “the tarsnap guy”, a cryptographer and former FreeBSD Security Officer, so I trust its security (and the soundness of the crypto)
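                                                    For concreteness, a spiped-in-front-of-sshd setup looks roughly like this (the ports, key path, and server address are illustrative):

                                                    ```shell
                                                    # Generate the shared secret once and copy it to both machines.
                                                    dd if=/dev/urandom bs=32 count=1 of=/etc/ssh/spiped.key

                                                    # Server: decrypt traffic arriving on 8022 and hand it to the local
                                                    # sshd, which can now be firewalled off from the outside entirely.
                                                    spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/ssh/spiped.key

                                                    # Client: a local port that encrypts everything to the server's 8022.
                                                    spiped -e -s '[127.0.0.1]:2200' -t '[203.0.113.1]:8022' -k /etc/ssh/spiped.key
                                                    ssh -p 2200 user@127.0.0.1
                                                    ```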

                                                    I’m happy to see a security audit recommending against fail2ban, because fail2ban must run as root, as it updates the firewall rules (which is a reason why I do not like fail2ban: it makes the firewall rules non-auditable). fail2ban has also had vulnerabilities in the past where injecting the right strings into the logs could lead to running unintended code (not RCE, but almost).

                                                    I’m a huge fail2ban critic, but I understand why some people want to run it for web server protection and the like (I would still not run it personally, as it is a huge pile of spaghetti Python code). But if you do run it, I agree with the security auditor: run it as a user. You can use haproxy runtime maps to maintain a list of banned IPs for a service behind haproxy, for example, and updating the map can be done as a non-root user with access to the unix socket.

                                                    1. 7

                                                      I agree with everything, however… “Because fail2ban must run as root, as it updates the firewall rules” is not really correct. You can have proper privilege separation in that kind of app by immediately splitting into a) a log reader, b) a parser with no privileges other than running, and c) two privileged scripts which exit on any argument that is not an IP address and add/remove them from an ipset. That’s a setup that’s easy to verify and almost equivalent to no-root.

                                                      I wrote something about various way of improving on SSH security https://blog.viraptor.info/post/who-cares-about-security-by-obscurity

                                                      1. 5

                                                        You can have proper privilege separation in that kind of app by immediately splitting into a) a log reader, b) a parser with no privileges other than running, c) two privileged scripts which exit on arguments that are not IP addresses and add/remove them from ipset. That’s a setup that’s easy to verify and almost equivalent to no-root.

                                                        You are right. You can use ipset, or even better switch to nftables and use regular nftables sets. However, last time I checked, this was not the default configuration of fail2ban, and there is very little official documentation on this topic. IMHO, this shows how much the fail2ban team prioritise security theater over real security.

                                                      2. 4

                                                        The last paragraph kind of contradicts the one before it. The last one explains clearly why fail2ban doesn’t need to run as root, because it doesn’t necessarily need to update firewall rules. I e.g. use it to update a text file with IP addresses that Apache uses as an allowlist.

                                                        1. 2

                                                          The last paragraph kind of contradicts the one before it.

                                                          Not really, but you’re right, it’s confusing. In the second-last paragraph, I made the assumption that Mullvad uses fail2ban to block IPs on either SSH or OpenVPN/WireGuard. In this context, your only option (AFAIK) is firewall blocking.

                                                          In the last paragraph, I am talking about web filtering. Yes you’re right. You can use apache/haproxy allow list files, but I don’t think it is Mullvad’s (and 99% of people’s) use-case.

                                                        2. 4

                                                          Disclaimer, I’ve been a fierce critic of fail2ban

                                                          piggybacking to say the same! here’s my little (mildly inflammatory) writing on the subject: https://j3s.sh/thought/fail2ban-sux.html

                                                          1. 2

                                                            I’ve been using sshguard for years, but haven’t thought too hard about it - how does it compare to fail2ban?

                                                            1. 3

                                                              I have not looked deeply into sshguard. But looking at it briefly, it looks better (as it uses ipset, which can be used for privilege separation). However, I am bothered by the whole concept of blacklisting IPs. It doesn’t protect against zero-days, because if a zero-day is announced, bots will exploit it before getting blacklisted after X retries. If you’re trying to prevent password brute-forcing, public key authentication will prevent this without the risk of getting blacklisted from your own server.

                                                              I think the article linked by @j3s in their comment summarises my opinion very well. And a lot of it is applicable to sshguard.

                                                              Anyway, to answer your original question :P, yes, sshguard is an improvement, but I still think the design of the solution is wrong. I would still recommend spiped instead.

                                                            2. 1

                                                              Thanks for mentioning this, I’d never heard of it. I don’t like fail2ban so I’ve always been hiding behind wireguard/tailscale etc on top of disallowing passwords. I can really see a use for spiped in quite a few areas of my job

                                                            1. 2

                                                              Is there a reason I should expect Defaultable / Applicative (Map Int) to do what it does? I’m looking at the innerJoin and think that without knowing the implementation I don’t know if it’s going to join or zip or do some sort of unique union or… Is the naming here relying on some convention? (just realised my reaction is the opposite of dfa’s - I see what the SQL does, but no idea what to expect from defaultable)

                                                              1. 1

                                                                For the behavior you are interested in, Defaultable’s Applicative instance delegates to the underlying Apply instance, so it comes down to whether or not you trust the underlying Map-like type to have implemented Apply sensibly.

                                                                I also don’t know how many law-abiding Apply instances you can write for a Map-like type. I can only think of one at the moment, but there could be more.
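                                                                For what it’s worth, the usual law-abiding Apply for a Map-like type combines values at keys present in both maps (an intersection). In Python-dict terms (purely an analogy, not the library’s actual code), the idea is:

```python
def map_apply(funcs, values):
    # Apply-for-maps as key intersection: the function at key k is applied
    # to the value at key k only when both maps contain k, which is why
    # innerJoin behaves like an SQL inner join rather than a union or zip.
    return {k: funcs[k](values[k]) for k in funcs.keys() & values.keys()}
```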

                                                              1. 11

                                                                So this is not a dig at them, but what are they trying to achieve? Mozilla overall has had recent issues with both user retention and funding. I’m not sure I understand why they’re pushing for an entirely new thing (and acquiring K-9 presumably cost them some money) rather than improving the core product situation?

                                                                Guesses: a) those projects are so separate in funding that it’s not an issue at all, or b) they’re thinking of an enterprise client with a paid version?

                                                                1. 9

                                                                  These things are indeed separate in funding. Thunderbird is under a whole different entity than, say, Firefox

                                                                  1. 2

                                                                    Aren’t they both funded by the Mozilla Foundation? How are they separate?

                                                                      1. 7

                                                                        @caleb wrote:

                                                                        Aren’t they both funded by the Mozilla Foundation? How are they separate?

                                                                        Your link’s first sentence:

                                                                        As of today, the Thunderbird project will be operating from a new wholly owned subsidiary of the Mozilla Foundation […]

                                                                        I’m confused…

                                                                        1. 2

                                                                          Seems pretty clear by the usage of the word “subsidiary”

                                                                          Subsidiaries are separate, distinct legal entities for the purposes of taxation, regulation and liability. For this reason, they differ from divisions, which are businesses fully integrated within the main company, and not legally or otherwise distinct from it.[8] In other words, a subsidiary can sue and be sued separately from its parent and its obligations will not normally be the obligations of its parent.

                                                                          The parent and the subsidiary do not necessarily have to operate in the same locations or operate the same businesses. Not only is it possible that they could conceivably be competitors in the marketplace, but such arrangements happen frequently at the end of a hostile takeover or voluntary merger. Also, because a parent company and a subsidiary are separate entities, it is entirely possible for one of them to be involved in legal proceedings, bankruptcy, tax delinquency, indictment or under investigation while the other is not.

                                                                  2. 5

                                                                    They’re going to need to work on a lot of things, including a lot of stability improvements as well as better/more standard support for policies and autoconfig/SSO for Thunderbird to really be useful in enterprise.

                                                                    Frankly, Thunderbird is the only real desktop app that I know of that competes with Outlook, and it’s kind of terrible… there really is a market here, and I don’t think that working on an Android client is what they need

                                                                    1. 2

                                                                      Gnome Evolution works better than Thunderbird in an enterprise. For Thunderbird, IIUC, you need a paid add-on to be able to connect to Office365 Outlook mailboxes (in the past there used to be an EWS plugin that worked with on-prem Outlook, but it doesn’t seem to work with O365), whereas Evolution supports OAuth out of the box.

                                                                      1. 4

                                                                        Thunderbird supports IMAP/SMTP Oauth2 out of the box, which O365 has if your org has it enabled. What it lacks (and what Evolution has in advantage) is Exchange support.

                                                                        If your org has IMAP/SMTP/activesync enabled then you can even do calendaring and global address completion using TbSync, which I rely heavily on for CalDAV / CardDAV support anyway (though I hear Thunderbird is looking to make these two an OOB experience as well)

                                                                    2. 3

                                                                      I can’t say for certain, but I think maybe they’re looking to provide a similar desktop experience on mobile. I use Firefox and Thunderbird for work, and it is a curious thing to note that Thunderbird did not get any kind of Android version. Firefox already released base and Focus as Android applications, so it would be cool to see a Thunderbird exist in the (F)OSS Android ecosystem.

                                                                      I have been a K-9 user for a number of years, but I do think its UI could use a bit of an update. I have been using it since Android 5.0 and it has basically had the same interface since the initial Material release. This could be an exciting time for K-9 to get a new coat of paint. I will love K-9 Mail even if this doesn’t pan out well.

                                                                      1. 4

                                                                        K-9 mail is almost perfect the way it currently is on Android (at least when it comes to connecting to personal mailboxes). I can’t speak about how well it’d work in an enterprise because I keep work stuff off my phone on purpose.

                                                                        1. 4

                                                                          The biggest functional shortcoming with K-9 is no support for OAuth2 logins, such as GMail and Office365. You can currently use K-9 Mail with an app-specific password in GMail, but Google will be taking that ability away soon. I also have some minor issues with notifications; my home IMAP server supports IDLE, but I still often see notifications being significantly delayed.

                                                                          In terms of interface, there was a Material cleanup a while ago, and the settings got less complicated and cluttered, so it’s very usable and reasonably presentable. But it does look increasingly out of date (though that’s admittedly both subjective and an endless treadmill).

                                                                          1. 3

                                                                            oauth2 was merged a few days ago https://github.com/thundernest/k-9/pull/6082

                                                                            1. 1

                                                                              Ah, yeah, I saw elsewhere that it’s the only priority for the next release.

                                                                    1. 9

                                                                      Is there any proof that the telemetry data is NOT put to good use to improve VSCode?

                                                                      1. 28

                                                                        I think there’s a tinge of paranoia that runs through the anti-telemetry movement (for lack of a better term; I’m not sure it’s really a movement). Product usage telemetry can be incredibly valuable to teams trying to decide how best to allocate their resources. It isn’t inherently abusive or malignant. VSCode is a fantastic tool that I get to use for free to make myself money. If they say they need telemetry to help make it better, then I am okay with that.

                                                                        1. 9

                                                                          I think the overly generic name does not help the situation. When people are exposed to telemetry like “we’ll monitor everything and sell your data”, I’m disappointed but not surprised when they block everything including (for example) rollbar, newrelic, etc.

                                                                          But MS shot itself in the foot by making telemetry mysterious and impossible to inspect or disable. They made people allergic to the very idea.

                                                                          1. 12

                                                                            I think the overly generic name does not help the situation. When people are exposed to telemetry like “we’ll monitor everything and sell your data”, I’m disappointed but not surprised when they block everything including (for example) rollbar, newrelic, etc

                                                                            It’s a bit uncharitable to read “they blocked my crash reporting service” as “they must have some kind of misunderstanding about what telemetry means” (if that’s what you’re implying when you say you’re disappointed but not surprised that people block them).

                                                                            I know exactly what services like rollbar do and what kinds of info they transmit, and I choose to block them anyways.

                                                                            One of the big takeaways from the Snowden (I think?) disclosures was that the NSA found crash reporting data to be an invaluable source of information they could then use to help them penetrate a target. Anybody who’s concerned about nation-state (or other privledged-network-position actor) surveillance, or the ability of law enforcement or malicious actors impersonating law enforcement to get these services to divulge this data (now or at any point in the foreseeable future), might well want to consider blocking these services for perfectly informed reasons.

                                                                            1. 5

                                                                              I believe that’s actually correct - people in general don’t understand what different types of telemetry do. A few tech people making informed choices don’t contradict this. You can see that for example through adblock blocking rollbar, datadog, newrelic, elastic and others. You can also see it on bug trackers where people start talking about pii in telemetry reports, where the app simply does version/license checks. You can see people thinking that Windows does keylogger level reporting back to MS.

                                                                              So no, I don’t believe the general public understands how many things are lumped into the telemetry idea and they don’t have tools to make informed decisions.

                                                                              Side-topic: MS security actually does aggregate analysis of crash reports to spot exploit attempts in the wild. So how that works out for security is a complex case… I lean towards report early, fix early.

                                                                              1. 7

                                                                                You can see that for example through adblock blocking rollbar, datadog, newrelic, elastic and others.

                                                                                I’m not following this argument. People install adblockers because they care about their privacy, and dislike ads and the related harms associated with the tracking industry – which includes the possibility of data related to their machines being used against them.

                                                                                Adblocker developers (correctly!) recognize that datadog/rollbar/etc are vectors for some of those harms. That not every person who installs an adblocker could tell you which specific harm rollbar.com corresponds to vs. which adclick.track corresponds to does not imply that, if properly informed about what rollbar.com tracks and how that data could be exploited, they wouldn’t still choose to block it. After all, they’re users who are voluntarily installing software to prevent just such harms. I think a number of these people understand just fine that some of that telemetry data is “my computer is vulnerable and this data could help someone harm it” and not just “Bob has a diaper fetish” stuff.

                                                                                It’s kind of infantilizing to imagine that most people “would really want to” give you their crash data but they’re just too stupid to know it, given how widely reported stuff like Snowden was.

                                                                                You can also see it on bug trackers where people start talking about pii in telemetry reports, where the app simply does version/license checks. You can see people thinking that Windows does keylogger level reporting back to MS.

                                                                                That some incorrect people are vocal does not tell us anything, really.

                                                                                1. 3

                                                                                  It’s kind of infantilizing to imagine that most people “would really want to” give you their crash data but they’re just too stupid to know it, given how widely reported stuff like Snowden was.

                                                                                  Counterpoint: Every time my app crashed, people not only gave me all the data I asked for, they just left me with a remote session to their desktop. At some point I switched to rollbar, and they were happy when I emailed them about an update before they got around to reporting the issue to me. So yeah, based on my experience, people are very happy to give crash data in exchange for better support. In a small pool of customers, not a single one even asked about it (and due to the industry, they had to sign a separate agreement about it).

                                                                                  That some incorrect people are vocal does not tell us anything, really.

                                                                                  The bad part is not that they’re vocal, but that they cannot learn the truth themselves and even if I wanted to tell them it’s not true - I cannot be 100% sure, because a lot of current telemetry is opaque.

                                                                                  1. 3

                                                                                    I don’t know how many customers you have or how directly they come in contact with you, but I would hazard a guess that your business is not a faceless megacorp like Microsoft. This makes all the difference; I would much more readily trust a human I can talk to directly than some automated code that sends god-knows-what information off to who-knows-where, with the possibility of it being “monetized” to earn something extra on the side.

                                                                                  2. 3

                                                                                    People install adblockers because they care about their privacy, and dislike ads and the related harms associated with the tracking industry

                                                                                    ooof that’s reading way too much into it. I just don’t want to watch ads. And as for telemetry, I just don’t want the bloat it introduces.

                                                                            2. 7

                                                                              The onus is not on users to justify disabling telemetry. The ones receiving and using the data must be able to make a case for enabling it.

                                                                              Obviously, you need to be GDPR-compliant too; that should go without saying, but it’s such a low bar.

                                                                              Copy-pasting my thoughts on why opt-out telemetry is unethical:

                                                                              Being enrolled in a study should require prior informed consent. Terms of the data collection, including what data can be collected and how that data will be used, must be presented to all participants in language they can understand. Only then can they provide informed consent.

                                                                              Harvesting data without permission is just exploitation. Software improvements and user engagement are not more important than basic respect for user agency.

                                                                              Moreover, not everyone is like you. People who do have reason to care about data collection should not have their critical needs outweighed for the mere convenience of the majority. This type of rhetoric is often used to dismiss accessibility concerns, which is why we have to turn to legislation.

                                                                              If you make all your decisions based on telemetry, your decisions will be biased towards the type of user who forgot to turn it off.

                                                                            3. 9

                                                                              This presumes that both:

                                                                              a) using data obtained from monitoring my actions to “improve VSCode” (Meaning what? Along what metrics is improvement defined? For whose benefit do these improvements exist? Mine, or the corporation’s KPIs? When these goals conflict, whose improvements will be given preference?) is something I consider a good use in any case

                                                                              b) that if this data is not being misused right now (along any definition of misuse) it will never in the future cross that line (however you choose to define it)

                                                                              1. 2

                                                                                Along what metrics is improvement defined?

                                                                                The first step would be to get data about usage. If MS finds out a large number of VSCode users often use the JSON formatter (just an example), I assume they will try to improve it: make it faster, add more options, etc.

                                                                                Mine, or the corporation’s KPIs

                                                                                It’s an OSS project which is not commercialized in any way by the “corporation”. There are no commercial licenses to sell; with VSCode, all they earn is goodwill.

                                                                                will never in the future cross that line

                                                                                Honest question: in what way do you think VSCode usage data could be “misused”?

                                                                                1. 12

                                                                                  i assume they will try to improve that : make it faster, add more options etc etc.

                                                                                  You assume. I assume that some day, now or in the future, some PM’s KPI will be “how do we increase conversion spend of VSCode customers on Azure” or similar. I’ve been in too many meetings with goals just like that to imagine otherwise.

                                                                                  It’s an OSS project which is not commercialized in any way by the “corporation”

                                                                                  I promise you that the multibillion dollar corporation is not doing this out of the goodness of their heart. If it is not monetized now (doubtful – all those nudges towards azure integrations aren’t coincidental), it certainly will be at some point.

                                                                                  Honest question, in what way do you think VSCode usage data be “missused” ?

                                                                                  Well, first and most obviously, advertising. It does not take much of anything to connect me back to an ad network profile and start connecting my tools usage data to that profile – things like “uses AWS-related plugins” would be a decent signal to advertisers that I’m in the loop on an organization’s cloud-spend decisions, and ads targeted at me to influence those decisions would then make sense.

                                                                                  Beyond that, crash telemetry data is ripe for exploitation, like I mentioned in another comment here. Even if you assume the NSA-or-local-gov-equivalent isn’t interested in you, J Random ransomware group is just one successfully-faked law-enforcement subpoena (which, as we discovered this year, most orgs are doing very little to prevent) away from vscode-remote-instance crash data from servers people were SSH’d into. Paths recorded in backtraces tend to have usernames, server names, etc.

                                                                                  “This data collected about me is harmless” speaks more to a lack of imagination than to the safety of data about you or your organization’s equipment.

                                                                              2. 4

                                                                                That point is irrelevant, since it’s impossible to prove that microsoft is NOT misusing it now and that they will NOT misuse it in the future.

                                                                                1. 3

                                                                                  No, so should we blindly trust Microsoft with our data, or be cautious?

                                                                                1. 5

                                                                                  Why does nobody complain about how OpenSSL doesn’t follow the UNIX philosophy of “Do one thing well”?

                                                                                  1. 33

                                                                                    Probably because there’s already so many other things to complain about with openssl that it doesn’t make the top 5 cut.

                                                                                    1. 17

                                                                                      Because the “Unix philosophy” is incredibly vague and ex-post-facto rationalization. That, and I suspect cryptography operations would be hard to do properly like that.

                                                                                      1. 3

                                                                                        Does UNIX follow the UNIX philosophy?

                                                                                        I mean, ls has 11 options and 4 of them deal with sorting. According to the UNIX philosophy, sort should’ve been used for sorting. So “Do one thing well” doesn’t hold here. Likewise, other tenets are not followed too closely. For example, most of these sorting options were added later (“build afresh rather than complicate old programs” much?).

                                                                                        The first UNIX actually didn’t have sort, so it’s understandable why an option might’ve been added (only -t at the time) and why it might’ve stayed (backwards compatibility). The addition of sort kinda follows the UNIX philosophy, but adding more sorting options to ls after sort existed goes completely contrary to it.

                                                                                        1. 3

                                                                                          Theoretically, yes: it seems that Bell Labs’ UNIX followed the UNIX philosophy, but BSD broke it.

                                                                                          Reference: http://harmful.cat-v.org/cat-v/

                                                                                        2. 3

                                                                                          Everyone’s still wondering if the right way to phrase it is that “it does too many things” or “it doesn’t do any of them well” ¯\_(ツ)_/¯

                                                                                          1. 2

                                                                                            Maybe because it’s not really a tool you’re expected to use beyond a crypto Swiss Army knife. I mean, it became a de facto certificate request generator because people have it installed by default, but there are better tools for that. As a debug tool it is a “one thing well” tool. The one thing is “poke around encryption content / functions”.

                                                                                            Otherwise, what would be the point of extracting things like asn1parse, pkey, or others when they would all be backed by the same library anyway? Would it change anything if you called openssl-asn1parse as a separate tool instead of openssl asn1parse?

                                                                                            1. 1

                                                                                              For the same reason no one complains about curl either?

                                                                                              1. 1

                                                                                                related, here’s a wget gui that looks similarly complex https://www.jensroesner.com/wgetgui/#screen

                                                                                            1. 1

Nearly impossible to detect on the running server. However, if you have something like a Pi-hole watching for DNS exfiltration attempts, this becomes much easier to detect. It does require multiple layers of protection though, I'll give it that.

                                                                                              1. 2

                                                                                                Since I haven’t seen any mention of it tampering with the kernel or hooking actual syscalls (as opposed to userspace syscall wrappers), it sounds like its concealment mechanisms should be pretty simple to bypass using statically-linked executables? (A static busybox build, say.)

                                                                                                1. 1

This was my take: LD_PRELOAD wouldn't work in a statically linked context.
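To illustrate why: this kind of concealment hooks the libc wrappers via the dynamic loader, so a static binary never runs the hook at all. A minimal sketch of the technique (the "hidden_" filename marker and the filtering rule are made up for illustration; real malware targets specific names):

```c
/* Sketch of an LD_PRELOAD-style userspace hook: override libc's readdir()
 * wrapper so directory listings skip certain entries. Loaded via LD_PRELOAD
 * into dynamically linked programs only - a statically linked busybox never
 * consults the runtime loader, so this function is never interposed. */
#define _GNU_SOURCE
#include <dirent.h>
#include <dlfcn.h>
#include <string.h>

struct dirent *readdir(DIR *dirp) {
    /* look up the real libc readdir behind our interposed copy */
    static struct dirent *(*real_readdir)(DIR *);
    if (!real_readdir)
        real_readdir = (struct dirent *(*)(DIR *))dlsym(RTLD_NEXT, "readdir");

    struct dirent *e;
    while ((e = real_readdir(dirp)) != NULL) {
        /* "hidden_" is a hypothetical marker for entries to conceal */
        if (strncmp(e->d_name, "hidden_", 7) != 0)
            return e; /* pass everything else through untouched */
    }
    return NULL; /* only concealed entries remained */
}
```

Compiled as a shared object (`gcc -shared -fPIC -o hook.so hook.c`) and injected with `LD_PRELOAD=./hook.so ls`, this filters the listing; run the same `ls` from a static busybox build and the hidden entries reappear.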

                                                                                                2. 1

                                                                                                  Or if you’re running in AWS there’s also their guardduty alert which I hope would pick it up: https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-ec2.html#backdoor-ec2-ccactivitybdns

                                                                                                  1. 1

                                                                                                    The grsecurity patchset includes a feature called Trusted Path Execution (TPE). It can integrate with the RTLD to completely mitigate LD_PRELOAD abuses. I’m working on implementing something similar in HardenedBSD this weekend. :-)

                                                                                                  1. 5

                                                                                                    What your code actually needs in terms of infrastructure should be inferred as you build your application, instead of you having to think upfront about what infrastructure piece is needed and how to wire it up.

I am very, very doubtful that this is the right approach. Provisioning costs money. Shouldn't that mean it's better to be explicit than implicit about the underlying computing resources being used? If costs overrun the budget, or you decide you need more capacity, you need to be able to go change things yourself; it needs to be easy to find, and it needs to be explicitly written out what is being provisioned and why. Furthermore, how can you decide what kind of resources you need without dynamic information obtained from objective profiling and subjectively weighing tradeoffs? A procedural macro could only ever obtain static information.

                                                                                                    1. 3

                                                                                                      I’m very confused by the quote. I mean, the code may include a database query, but we need real world context to know if that can be a file backed shim, or a globally distributed cluster with sharding based on local law.

                                                                                                    1. 3

                                                                                                      I haven’t followed Swift’s development much lately, but I did watch one of the WWDC videos on some of the new concurrency stuff. I was fairly shocked to see just how many features they keep adding to the language, rather than building a core language and extending it with libraries; it’s not going to be long before the language is an unmanageable mess like C++ with feature tacked on top of feature.

Swift has a decent type system, and the features being added could easily be represented in it. For example (I can't find the original again, so the exact syntax will be wrong; it was something along the lines of):

    do {
        for try await x in someCollection {
            ...
        }
    } catch {
        ...
    }
                                                                                                      

                                                                                                      (though I have a feeling the actual syntax was even more horrible than that).

It seems like Apple are jumping through incredible hoops just to avoid the dreaded monad. Most of the examples in https://github.com/apple/swift-evolution/blob/main/proposals/0296-async-await.md are clearly monadic code, but Apple seems to have taken the same approach to language design as Google did with Go and assumed developers are too stupid to learn anything beyond basic OOP and introductory FP (FP is just map and reduce, right?)

                                                                                                      If they could bring themselves to actually learn from other parts of the programming language community, they’d find that they get to use abstractions like Applicative, which would allow automatic parallelisation with concise syntax - just see Facebook’s success with Haxl and their Sigma spam filtering system, where giving non-technical people access to these abstractions means they can write high performance spam filtering rules that concurrently access resources from dozens of other services, all in parallel.

Stop adding crap to the language; provide the primitives that allow these features to be added as libraries. Language features are technical debt that you can never remove, while libraries let users swap in new implementations and allow ideas to compete. </rant>

                                                                                                      1. 5

                                                                                                        As you said Swift has a great type system and they’re still investing a lot into it. You can still write monadic code and there are well established patterns for mixing monadic code with structured concurrency.

The problem with monadic code is that it was getting unwieldy. App developers had to deal with so many async components: off-thread view rendering work, databases, network calls, all of which may depend on each other in complex ways; then add the complexity of GUI framework patterns on top.

People were making mistakes: forgetting to handle error cases, forgetting to call a callback, calling it multiple times, etc. The code would get very deeply nested due to callbacks calling callbacks within callbacks. Structured concurrency unravels all of that the same way structured programming unravelled spaghetti code riddled with gotos.

                                                                                                        1. 2

Having things be actual language features does have real advantages as well, so while adding features willy-nilly can go wrong, it isn't inherently bad.

                                                                                                          Many of the more frustrating things in C++ come from things being library rather than language based, and some of the language features are there to support things being implemented as libraries rather than actual language features.

                                                                                                          Pointing to other languages that have drastically different designs and saying because they do something in way X, every language can do things in way X, is somewhat disingenuous.

As for concurrency, making it a first-class language feature enables safety and performance guarantees that cannot be made in library-based systems. For example, pretty much all "automatic" parallelization systems fall over in any non-trivial case, and moving such a system into the standard library leaves it just as fixed in place as a language feature, only without the benefits to go with it.

                                                                                                          1. 1

                                                                                                            The reasoning is explained when they talk about “some kind of Futurable protocol”. To be honest I’m a bit lost when you’re saying something like applicative would help non-technical people. How would that work? Where do those concepts meet?

                                                                                                          1. 1

There are a few standard image-size tricks missing here. For example, the apk index is left behind, and the -dev packages are not removed after installing the SSL packages.
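A sketch of both tricks for an Alpine-based image (package names and the `make install` step are placeholders for whatever the image actually builds):

```dockerfile
# --no-cache fetches a fresh index and never writes /var/cache/apk
# or the index into the layer
RUN apk add --no-cache openssl

# group build-only deps (-dev packages, compilers) under a virtual name,
# and remove them in the same RUN so they never persist in any layer
RUN apk add --no-cache --virtual .build-deps openssl-dev build-base \
 && make install \
 && apk del .build-deps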

                                                                                                            1. 1

Thanks, I'll try to add these suggestions. Do you by chance have any pointers to Dockerfiles that do that? (I find it always easier to steal^w learn from others…)

                                                                                                              1. 2

                                                                                                                There’s a lot of good examples at https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

                                                                                                                To verify your image and get a summary of things you can remove, check https://github.com/viraptor/cruftspy

                                                                                                            1. 3

                                                                                                              The article does a poor job of referencing the project itself, which is here. And here’s how to contribute so this wonderful project has a greater chance at succeeding.

                                                                                                              1. 2

                                                                                                                AsahiLina also live streams her GPU work at least from time to time, and posts about it on her twitter: https://twitter.com/LinaAsahi

                                                                                                                1. 2

So I totally expected it was Marcan42's April Fools joke alter ego that kept going afterwards. But now, with the amount of work that has gone into it, I'm actually not sure anymore?

                                                                                                                  1. 2

                                                                                                                    My guess is that it’s probably Alyssa’s anime-sona, but who knows. What’s life without a little mystery?

                                                                                                              1. 4

                                                                                                                Has anybody used tags with great effectiveness? I’ve never heard of this to be something ultimately super useful… I’m very curious to hear if anyone has used them as their primary organizational tool!

                                                                                                                1. 2

As my primary organizational tool? Not quite yet. I think I might like to do that some day. As an organizational tool in the toolbox? Sure. For instance, I use file tagging to organize a collection of etexts and track their read/unread status. This requires some discipline on my part, but it's worth it.

                                                                                                                  Right now I’m using tmsu as my tagging tool of choice. One thing that interests me about Supertag is how it treats a logical path as an intersection of tags.

                                                                                                                  1. 2

For balance: I have looked at tags via FUSE for a loooong time. Typical tagging solutions looked too weak. I used RelFS and found it too limited. I wrote my own and ended up with a lot of weird but personally convenient setups based on indexing stuff into SQL databases, plus multiple versions of file tagging… and guess what: I tried to use each of my tagging things, gave up, and just use hierarchical categorisation. The more classically structured SQL-based tools see daily use, including for reading Lobste.rs.

                                                                                                                    «read/unread» tracking though? Sure, I have a column in my SQL table for grabbing web content streams.

                                                                                                                    1. 2

                                                                                                                      Not sure if great effectiveness, but I use tags for all scanned correspondence. Things are easier if my recent scan is tagged “bank”, “mortgage”, “rate change”, “(address)”. I’ve never had an issue with too many tags, so I slap anything useful on them.

                                                                                                                      1. 2

macOS lets you tag files with colours, which may optionally be named. I use that to keep track of whether I've watched ⚪️ downloaded films, and whether they are keepers for being good 🔵 or bad 🔴.

                                                                                                                        At an earlier job, we used colours to track the stages of preparation for documents.

                                                                                                                        1. 2

                                                                                                                          I have a thunderbird tag called “reply” and a filter which every ten minutes marks “reply” emails as unread. Works pretty well!

                                                                                                                        1. 1

This is something I've been meaning to look at for a long time, but I have procrastinated on the accompanying media cleanup. I imagine it must be possible to back it with a zvol, and (less confidently) perhaps with a BTRFS subvol?

                                                                                                                          1. 3

From the docs, it looks like you're only storing the links on that filesystem. The actual data still lives where you would normally store it. The tags are backed by an SQLite database.

                                                                                                                            So in practice you could double-fuse and store the files on S3 and tag them through this system. Or triple-fuse and store that database on S3 transparently as well.

                                                                                                                          1. 1

I love systemd timers; the only thing they lack is error mail integration like cron's. Which belongs one level up, probably, but still…

                                                                                                                            1. 3

You can use ExecStopPost=, or create a template service that will send e-mail on service failure.
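The template pattern looks roughly like this (the unit name and the sendmail invocation are illustrative; any local MTA works):

```ini
# /etc/systemd/system/email-failure@.service
[Unit]
Description=Send a failure email for %i

[Service]
Type=oneshot
# %i expands to the escaped name of the failed unit
ExecStart=/bin/sh -c 'echo "Unit %i failed" | sendmail root'
```

Then add `OnFailure=email-failure@%n.service` to the [Unit] section of any service you want monitored.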

                                                                                                                              1. 1

                                                                                                                                You could use https://github.com/kbslabs/open-exec-wrapper (or similar - there’s a few options) as a wrapper which supports more fancy stuff like sending the script error to rollbar.

                                                                                                                                1. 1

                                                                                                                                  I guess the proper way of dealing with this would be to do some kind of integration via DBus to know if a unit failed.

                                                                                                                                1. 3

                                                                                                                                  It’s a shame some Darwin stuff wasn’t handled as blockers. Crystal is broken and I think v8 still doesn’t build :-( There doesn’t seem to be enough people with time to support that system.

                                                                                                                                  1. 3

                                                                                                                                    Darwin is kind of a different release channel and I think it’s fine to not release those in lockstep. The biggest darwin blockers I’ve run into have been the makeWrapper stuff. I’ve been submitting some PRs to address the ones I run into. The maintainers have been pretty receptive when I do so.

                                                                                                                                    1. 1

Personally I'm happy they keep the focus on Linux, as that's usually the environment of production and CI. I wouldn't be surprised if some Mac folk jumped ship after drinking the Kool-Aid 😉

                                                                                                                                      1. 2

While I use NixOS as my daily driver, it would be nice to have faith that my Nix packages would build on Darwin. Our Linux build chain has been rock solid since we moved to a Nix flake, and it would have been nice not to have wasted the past two weeks tracking down a Mac build issue.

                                                                                                                                        1. 3

                                                                                                                                          But then, especially with flakes, can’t you add a flake input with an older version of nixpkgs, and pick the problematic packages from the older version when they weren’t broken? I would assume that’s doable, though I never tried it yet, so I’m curious…
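An untested sketch of that idea (the input name `oldpkgs` and the revision are placeholders; you'd pin whatever commit last built the broken package):

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    # pin a known-good revision from before the breakage (placeholder rev)
    oldpkgs.url = "github:NixOS/nixpkgs/<known-good-rev>";
  };

  outputs = { self, nixpkgs, oldpkgs, ... }: {
    # wherever the package set is used, pull the broken package from the pin,
    # e.g. (hypothetical):
    # environment.systemPackages =
    #   [ oldpkgs.legacyPackages.x86_64-darwin.crystal ];
  };
}
```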

                                                                                                                                          1. 2

                                                                                                                                            Your understanding matches mine.

I think the main issue is stuff that never worked on Darwin.

                                                                                                                                            1. 2

                                                                                                                                              More the stuff that worked on Darwin but new versions don’t. Both V8 and crystal worked before. Then upgrades got merged without validating Darwin.

                                                                                                                                    1. 9

                                                                                                                                      I don’t know if the lessons at the end are meaningful. A power surge that can fry hardware doesn’t care about your software setup or partitions. Electrons go where they want to go. Read-only partitions, or not, I expect things survived by accident only.

                                                                                                                                      1. 5

                                                                                                                                        The EFI partition and its used area may have been small and/or right at the (inner or outer) edge of the disk.

Physical layout could have had a hand in the odds.

                                                                                                                                      1. 1

                                                                                                                                        In my opinion, server settings are where NixOS shines the most!

I was curious about that one recently. On a Mac, if I need to install something, I just do it, and if it's big enough I'll check back in 10 minutes - it mostly works (ignoring all the other problems). Maybe run GC once a week and clean up 40GB of trash. But I wouldn't want to attempt that on RPi-class hardware. How do people deal with that? Some external compilation service/cache? I mean cases where it turns out you're about to compile LLVM on your underpowered Celeron-based NAS.

                                                                                                                                        1. 3

                                                                                                                                          Some external compilation service/cache?

Basically yeah, it's pretty easy to set up and defer to something else to build: https://sgt.hootr.club/molten-matter/nix-distributed-builds/

                                                                                                                                          1. 1

                                                                                                                                            Unless I’m misunderstanding, you want to target revisions of nixpkgs channels which are already built as per: https://status.nixos.org

                                                                                                                                            1. 1

I'm on a rolling release, but it's not perfect. Specifically, some updates are not built yet at the time of installation. Also, changing some configs means recompiling anyway (for example if you want to enable JIT in Ruby).

                                                                                                                                              1. 2

Specifically, some updates are not built yet at the time of installation.

I guess for something like a Raspberry Pi your best option would be to stick to a release instead of nixos-unstable. For unstable, or for configs which are not in the upstream caches, you'd need some kind of external cache (which could be your desktop machine), yes.

One pitfall I ran into when using a stable release: I was tracking release-21.11, which also includes upgrades that are not yet built; switching to nixos-21.11 solved that.