1. 1

    Can anyone link the 3 patches submitted to lkml mentioned in the paper?

    I had trouble finding them from the little information in the paper. That the search on lore.kernel.org doesn’t seem to have a full-text index makes this harder.

    1. 2

      We don’t know where those patches are, and they were not submitted from umn.edu emails. What follows is only speculation.

      https://lore.kernel.org/linux-nfs/YIEqt8iAPVq8sG+t@sol.localdomain/

      I think that (two of?) the accounts they used were James Bond jameslouisebond@gmail.com (https://lore.kernel.org/lkml/?q=jameslouisebond%40gmail.com) and George Acosta acostag.ubuntu@gmail.com (https://lore.kernel.org/lkml/?q=acostag.ubuntu%40gmail.com). Most of their patches match up very closely with commits they described in their paper:

      Figure 9 = https://lore.kernel.org/lkml/20200809221453.10235-1-jameslouisebond@gmail.com/

      Figure 10 = https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.ubuntu@gmail.com/

      Figure 11 = https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.ubuntu@gmail.com/

      1. 1

        umn.edu

        I guess they must be in there? These are commits from the main author. https://github.com/torvalds/linux/commits?author=QiushiWu

      1. 4

        I don’t get it.

        It’s bad enough that our web browsers connect to third parties “on our behalf” without our consent or knowledge, but it’s even worse that those third parties can continue to “keep tabs” on us in some way (without browser tabs!) while we’re not even actively web browsing

        How would an open (idle) TCP connection still request updates over HTTP? The only contrived example I can come up with would be WebSockets…

        Also, I’m not sure the author has ever heard of TCP socket timeouts and how reliably unreliable they are.

        1. 1

          It can be used to track you across websites, i.e. it is a short-lived supercookie. That also means the state separation that protects against supercookies should help.

          Even if it does not close the connection, such a protection would need to avoid reusing the connection for a different website with separated state. Also, clearing state (“cookies” in UI speak) for the website whose connection is left open means that connection must never be used again until it closes at the normal timeout. (Closing the connection before the normal timeout signals that user interaction to the server; I’m not sure it is worth preventing that, so maybe it is fine to just close it early.)

          I don’t know if all of this is correctly implemented in Firefox; it might be worth testing and reporting any deviations as bugs.

          1. 1

            I still don’t really follow. How would anything with cookies (HTTP, stateless) work via a shared TCP connection? I’m completely missing how this is anything more than a hanging connection (where the website, under certain circumstances, would assume you’re there for 20 minutes longer than you are).

            Also, in the supercookie case you mentioned, wouldn’t the problem be the reuse of the connection rather than the “not closing”?

            1. 2

              Yes, it is the reuse. Reuse is the reason browsers don’t close connections immediately in the first place. Anything with a noticeable behaviour difference can be used too, like closing the connection early on some user interaction where it would otherwise have idled longer before closing.

              Half-closed TCP connections, sometimes called hanging, are in a different state than fully open, idle connections, which can start transmitting again at any time. Normal HTTP cookies are per request; multiple requests can share the same connection: with HTTP keep-alive in serial fashion, and with HTTP/2 and later generally in parallel. Supercookies do not rely on HTTP cookies; sometimes they track users only by measuring timing differences.

              If you never use that web browser again, it makes no difference. But if you come back to it and the process was not terminated (otherwise the connection would have been closed by the kernel), the connection is still idle and then gets reused.

              When the connection to a third party gets reused for a request that is for a different first party than the one it was opened for, that can be used for tracking across the web.
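
              As a concrete sketch of that mechanism (my own illustration, not anything from the article): a third-party server can key its tracking on the TCP connection instead of on a cookie. If the browser keeps a connection open and reuses it for an embed on a different first party, both requests arrive under the same connection ID. Something like this, in Python:

              import itertools
              import socket

              conn_ids = itertools.count(1)
              srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
              srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
              srv.bind(("127.0.0.1", 8080))
              srv.listen()

              while True:
                  conn, _ = srv.accept()
                  cid = next(conn_ids)  # stable for the lifetime of this TCP connection
                  with conn:
                      while True:
                          data = conn.recv(65536)  # naive: assumes one request per recv
                          if not data:
                              break  # client closed: this "supercookie" ends here
                          # The Referer header reveals which first party embedded us.
                          ref = next((line.split(":", 1)[1].strip()
                                      for line in data.decode("latin-1").split("\r\n")
                                      if line.lower().startswith("referer:")), "-")
                          print(f"connection {cid}: request from first party {ref}")
                          conn.sendall(b"HTTP/1.1 204 No Content\r\n"
                                       b"Content-Length: 0\r\n\r\n")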

        1. 1

          The author omits that the SSPL doesn’t work for the stated purpose of the article, MongoDB, or Elastic. (As discussed before: no need to accept it for use; not practical to comply with when accepted.)

          The ACSL also doesn’t work for its stated purpose. As a simplification, take “capitalist” to mean someone setting terms for workers while controlling more than 50% of the capital. Take, for example, a manufacturer of clothes who needs software to run the machine that cuts cloth before sewing. They create a company that is fully worker-owned, then rent all the machines to that company and provide any other capital it needs to operate, under the renter’s conditions. Such a company is clearly operating under terms set by a capitalist, yet it is able to use the software under this license. The authors of the ACSL seem to recognize this; the license seems to be intended more as art than for purpose.

          Many licenses that say they want to be better than open source for some purpose fail at being fit for that purpose, even more so than one of the established licenses would. I don’t think that failure is inherent in such a goal. But some goals are much more complex to create mechanisms for than others, which is a reason why people who are serious about such goals usually don’t start with a copyright license.

          Even with its holes, the AGPL might still be better than the ACSL if your intention is to ward off capitalists, as it has a share-alike requirement, which the ACSL lacks. Having a license that plugs the SaaS-wraps-FLOSS loophole of the AGPL seems worthwhile either way.

          1. 3

            I do not fully understand the difference between the SSPL and the AGPL; can someone explain it to me? Is the AGPL not considered open source? I think this may be a question for Drew?

            If I understand correctly, the SSPL would force Amazon to open source anything that touches Elasticsearch, e.g. their entire platform? Is that right? That would mean the SSPL violates freedom #0.

            1. 5

              You can check which licenses are considered legitimately open source here: https://opensource.org/licenses/alphabetical

              Here’s a post by the OSI clarifying about the SSPL: https://opensource.org/node/1099

              And here’s the definition of ‘open source’: https://opensource.org/osd

              1. 4

                I’m unsure that the OSI can be treated as an honest broker here. Open Source means what it says, not what the OSI says it means; they can try to be a gatekeeper if they want, but no one is obliged to take them too seriously.

                Take this spew of nonsense (for example):

                What a company may not do is claim or imply that software under a license that has not been approved by the Open Source Initiative, much less a license that does not meet the Open Source Definition, is open source software. It’s deception, plain and simple, to claim that the software has all the benefits and promises of open source when it does not.

                That’s just incorrect, both factually and legally. Falsely saying “this license meets the open source definition” or “this license is approved by the OSI” would be deception. Saying “I believe this license is open source” is not.

                1. 1

                  “I believe X” is mostly not a testable assertion; it could also be true while X is wrong. It is also not that interesting to know what MongoDB and Elastic believe; it is interesting what they did. Thus: what could happen if one were to use software under the SSPL? Do MongoDB or Elastic use software they license under the SSPL?

                  A license like the SSPL that nobody can comply with is not a license for you. Thus claiming it’s an open source license is deceptive. See https://lobste.rs/s/t9kcgy/righteous_expedient_wrong#c_swk45k.

                2. 1

                  Maybe I’m missing something, but I still don’t see any details in that post, or the links from it, on what disqualifies the SSPL (how does it differ from the AGPL?).

                  1. 1

                    The post links to a mailing list thread which has further discussion about why the SSPL was not accepted as an open source license.

                  2. 1

                    Thank you, this clears up my initial misunderstanding of what SSPL requires.

                1. 3

                  A license that you can’t comply with isn’t a license. This article entirely ignores that.

                  The SSPL is not a license that is practical to comply with while running the software as a service. Neither MongoDB nor Elastic themselves are able to comply with their own license. Nor are they working on being able to.

                  Nobody can currently run a SaaS DB offering with only open source software. There will be proprietary software in your USB-C cable, your network card, your Ethernet switch, or something like that. You can’t buy a version of every piece of hardware needed that works with only open source software.

                  Yes, the reaction from the OSI also doesn’t lead to a practical open source license that plugs the SaaS-wraps-FOSS loophole. People with power in the OSI have been against stronger copyleft.

                  1. 3

                    I must be missing something. This claims that it addresses the type of supply chain attack that bit SolarWinds, and the load-bearing defense here appears to be digital signatures. Didn’t the malware that got introduced into the SolarWinds product have a valid digital signature?

                    Maybe I’ll spot what I’m missing after some more coffee.

                    1. 3

                      This claims that it addresses the type of supply chain attack that bit SolarWinds, and the load-bearing defense here appears to be digital signatures. Didn’t the malware that got introduced into the SolarWinds product have a valid digital signature?

                      It’s probably too subtle for Before Coffee o’clock.

                      I recommend reading Taylor Hornby’s Triangle of Secure Code Delivery.

                      With Gossamer, it isn’t just signatures. It’s also a transparency log and third-party verification (thus, attestations).

                      What this buys you is more than “don’t run unsigned code”. It’s also “don’t run code unless it was reproduced by $trustedThirdParty from the source code” and/or “don’t run code unless it’s been reviewed by $trustedVendors” too.

                      This adds more mechanisms for mitigating supply chain attacks, as well as unavoidable transparency that prevents stealth operation.
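
                      To make that policy concrete, here is a minimal sketch (mine, in Python rather than Gossamer’s PHP; the function shape, the attestation dicts, and the attestor names are invented, while the reproduced attestation type is mentioned later in this thread and “reviewed” is my shorthand for the $trustedVendors review idea above):

                      from nacl.exceptions import BadSignatureError
                      from nacl.signing import VerifyKey

                      TRUSTED_ATTESTORS = {"third-party-builder", "security-reviewer"}

                      def should_install(artifact, signature, provider_keys, attestations):
                          # 1. "Don't run unsigned code": a currently trusted provider
                          #    key must have signed this exact artifact.
                          signed = False
                          for key in provider_keys:
                              try:
                                  VerifyKey(key).verify(artifact, signature)
                                  signed = True
                                  break
                              except BadSignatureError:
                                  continue
                          if not signed:
                              return False
                          # 2. "Don't run code unless it was reproduced/reviewed":
                          #    require a matching attestation from a trusted third
                          #    party, as recorded in the transparency log.
                          return any(a["attestor"] in TRUSTED_ATTESTORS
                                     and a["type"] in ("reproduced", "reviewed")
                                     for a in attestations)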

                      1. 2

                        Hi Scott! Been a while :) How does one find developer pub keys in Gossamer?

                        1. 1

                          A few ways come to mind:

                          • Parse the cryptographic ledger from the first block, verifying the integrity of each entry
                          • Run the Synchronizer (local trust) and query the SQL database for non-revoked pub keys for a given provider
                          • Query the Gossamer Server (/verification-keys/:provider-name)

                          But most people will use the easy button:

                          $gossamerClient->getVerificationKeys('provider-name');
                          

                          This returns an array of strings representing the (hex-encoded, IIRC) Ed25519 public keys currently trusted by the universe.

                          (This actually does up to two of the things I listed, under the hood, depending on config.)
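
                          For the server route specifically, a rough sketch of a client (mine; the /verification-keys/:provider-name route comes from the list above, but the host and the JSON response shape are guesses):

                          import json
                          import urllib.request

                          def get_verification_keys(server, provider):
                              url = f"{server}/verification-keys/{provider}"
                              with urllib.request.urlopen(url) as resp:
                                  payload = json.load(resp)
                              # Assumed shape: a JSON array of hex-encoded Ed25519
                              # public keys currently trusted for this provider.
                              return list(payload)

                          # keys = get_verification_keys("https://gossamer.example", "provider-name")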

                          1. 1

                            So does trust in a provider’s keys basically come from votes in the transparency log? If so, whose votes do you trust? Is that manual?

                            1. 1

                              New keys are signed by your old keys.

                              Your first key is sorta TOFU but with transparency (since it’s published in the ledger).

                              There is nothing approximating WOT or key signing parties. Looking at it through the PGP lens will lead to confusion.

                              To verify keys in advanced threat models, you need only confirm that peers see the same view of the ledger. You can compare Merkle roots (Trillian) or summary hashes (Chronicle). Or the lazy way: query Gossamer Servers in addition to running the Synchronizer locally.

                              (These ledgers are centralized-publish, decentralized-verify, so we don’t need to deal with distributed consensus.)
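
                              A sketch of that lazy cross-check (my own; the /summary-hash route name is invented for illustration): fetch the latest summary hash from several independently operated servers and alarm on any disagreement.

                              import urllib.request

                              PEERS = [
                                  "https://ledger-mirror-1.example",
                                  "https://ledger-mirror-2.example",
                                  "https://ledger-mirror-3.example",
                              ]

                              views = {}
                              for peer in PEERS:
                                  with urllib.request.urlopen(f"{peer}/summary-hash") as resp:
                                      views[peer] = resp.read().decode().strip()

                              if len(set(views.values())) != 1:
                                  # A split view means someone is being shown a different
                                  # ledger: exactly the attack this comparison should catch.
                                  raise SystemExit(f"ledger disagreement detected: {views}")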

                              1. 1

                                so we don’t need to deal with distributed consensus

                                Not in the sense that you need to find agreement, but you still need to notice when you don’t agree, to be able to check whether that disagreement is a compromise.

                                Is there even a failure that a log without distributed disagreement detection can detect, which non-chained simple signatures wouldn’t already catch?

                        2. 2

                          That makes sense. FWIW, I have an easier time considering the merits without thinking about the SolarWinds mess. That combined with my earlier lack of caffeine obfuscated it a bit for me.

                          I like the transparency log, especially.

                          Now we need reproducible builds in a widespread way to give this some teeth.

                          1. 1

                            One of the defined attestation types is reproduced (i.e. for reproducible builds) for this exact reason.

                            1. 1

                              Getting the various tools to cooperate to that end is such a headache.

                              It feels like a relatively small budget spent on sponsoring an easy/reliable way to do it for C and C++ projects that use cmake and autotools would have an outsized impact.

                              1. 1

                                Well, this is starting with PHP first, not a compiled language yet :)

                                1. 1

                                  Add Java to it and you make a pretty decent amount of software more secure.

                        1. 1

                          Under what conditions, and how, can a client detect that another client noticed and proved that the log gave an incorrect answer? When does that other client do the additional work to notice this, if not every client does that work?

                          Some incorrect answers of interest: giving a different answer to some clients (a split view), an entry that was there before not being there any more (truncation), a missing entry (as not every client validates the full log), a response with an additional entry not seen when walking the log, and a write that was attempted but is not seen when trying to read it back.

                          1. 2

                            Combining ZFS and Linux is a GPL violation anyway, so Linus could not include it in Linux without violating the GPL unless Oracle gave explicit permission (or an exemption) for this, as Linus alluded to.

                            For more details, including why Canonical is violating the GPL by distributing ZFS in Ubuntu, see https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/ (disclosure: I work for Conservancy).

                            1. 4

                              Combining ZFS and Linux is a GPL violation anyway

                              That’s a strong statement. From what I understand, it’s not allowed to distribute Linux together with ZFS, but building ZFS yourself and using it on your own machine is not a GPL violation, right?

                              Linus could not include it in Linux

                              I’m with you there. But I don’t think anyone here has asked him to include it. Rather, this seems to be about Linus making changes to the kernel that make it harder to get ZFS to work on Linux.

                              1. 1

                                Distributing the combination is not the only problem when dealing with the copyright of ZFS on Linux: while I don’t like it, one can also be held liable for copyright infringement that others committed, e.g. by inducing it. That means this is also a question for when one contributes to or distributes ZFS on Linux without combining them.


                                On a more general matter: It is said, though disputed, that Bryan Cantrill (on here as @bcantrill) was one of the biggest proponents of the CDDL. If he were to read this, I would like to know from him (and from anyone contributing under the CDDL, if you care about having/giving a license):

                                1. Would you suggest anyone use the CDDL for new software?
                                2. Would you like existing software under the CDDL to move to a different license if that were easy?
                                3. Is it worth it to make sure new contributions to existing CDDL software are also available under another license that is less intentionally incompatible with other licenses (like 2-clause BSD, Apache 2.0, or something)?
                                1. 1

                                  The relevant Wikipedia article pretty much answers your questions, including quotes from @bcantrill: https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License

                                  Re #3: the CDDL is generally not incompatible with any OSS license, except MAYBE the GPL. The FSF thinks it’s incompatible, and Linus clearly has a perspective here, but he isn’t really saying it’s a legal issue, mostly an “Oracle is evil” issue (which everyone already knows). See the above Wikipedia entry for the details. Either way, it’s never been tested in court, so it’s still unknown whether it’s actually incompatible. Certainly the spirits of the GPL and CDDL licenses are compatible.

                                  Plus, the CDDL is an interesting license in that it’s file-based, i.e. it attaches to individual files, not to a project as a whole, which makes it unique in the OSS license tree. So you could only make new files in the repository/project dual-licensed. You can’t really relicense a CDDL-licensed file unless you also happen to own the copyright(s) to the entire file, and in the case of OpenZFS the copyright is now spread quite broadly, not limited to Oracle alone.

                                  Basically, there is OpenZFS, which everyone uses (across multiple platforms), and Oracle’s ZFS, which nobody uses (unless forced, for non-technical reasons). Oracle cannot import any of the OpenZFS changes back into their tree (legally speaking) because the Oracle version is no longer CDDL-licensed.

                                  OpenZFS has a lot of awesome features that Oracle can’t import into their version. The latest new feature Oracle can’t import is data encryption on disk.

                                2. 1

                                  That the GPL and CDDL are incompatible is mostly legal opinion at this point. Certainly the Conservancy has an opinion and the FSF has an opinion, which coincides with your statement of “fact”, but it’s never been tested in courts, and plenty of other lawyers have an opposing viewpoint to yours, so much so that Canonical is willing to bet their business on it. More about the various opinions can be found on the CDDL wikipedia page: https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License

                                  I think most people can agree that in spirit, both are compatible, to some degree, but there is a difference in that the GPL is a project-based license and the CDDL is a file-based license (which makes it unique).

                                  I don’t think either perspective can be called fact until the various court systems have ruled one way or another, and I don’t really see anyone itching to find out enough to dump a team of lawyers in front of the court.

                                  I’m certainly not going to say you are wrong, and Linus has made it very clear he has no intention of incorporating OpenZFS into the Linux tree anytime soon, but I think even if everyone on the planet agreed legally that it could be incorporated, I would like to think he (and many others) would hesitate anyway. The Linux tree is already pretty giant, and OpenZFS’s codebase is no slouch either (it’s in the millions of LoC). Plus, there isn’t really a huge benefit in incorporating OpenZFS into the kernel tree, since OpenZFS is cross-OS (Unix, BSD, macOS, Windows, Linux, etc.) and the Linux kernel … isn’t.

                                1. 2

                                  One of the comments on the linked site states:

                                  However, the change that broke SIMD for ZFS was not a technical one; rather, it was a symbol switching from EXPORT to EXPORT_GPL. From the outside, it seemed a deliberate choice to hamper 3rd party modules. And it would be fine even in this case, but it somewhat surprised me.

                                  What exactly does EXPORT_GPL mean? I’m not a kernel dev…

                                  1. 2

                                    It’s a signal from kernel developers that they expect anything using EXPORT_GPL to also be GPL’d code. It’s a legal stance, not a technical one.

                                    I.e., if you use EXPORT_GPL symbols, then they expect the GPL license (sometimes called “infection”) to apply to your code as well. If you use just EXPORT, then they don’t expect the GPL license to apply to that code.

                                    To be clear: “they” here means the kernel developers.

                                    1. 2

                                      Symbols which are EXPORTed are considered free to be used by any out-of-tree kernel module, irrespective of the license of that out-of-tree module. EXPORT_GPL symbols are intended only to be used by modules licensed under the GPL.

                                      1. 1

                                        There is no such permission given in the license of Linux. I remember at least one Linux copyright holder explicitly saying multiple times that they reserve the right to sue for copyright infringement irrespective of how the symbol is marked.

                                        While EXPORT_GPL shows that there is at least one person who reserves the right to sue for copyright infringement when the symbol is used in a module under an incompatible license, EXPORT doesn’t tell you anything more than the text of the license (GPL 2) itself does. EXPORT is not EXPORT_MIT or something like that.

                                    1. 1

                                      In my opinion, the Google Cloud Platform UI sucks; it takes so long to get a service up and running. I once met someone who worked at Google who said the same thing: it’s new in the market and they’re trying their best to make the services better. On the other hand, Amazon Web Services is really smooth and really easy to get started with. I haven’t used other cloud services, but I’m sure each one has its pros and cons.

                                      1. 1

                                        Has anyone tried OpenStack by Red Hat?

                                        1. 1

                                          Isn’t OpenStack by Canonical?

                                          I set up a small OpenStack instance this past summer and, while it was a royal pain to set up, it seemed fairly well thought out. The performance was not amazing, but not outside the expected performance of a cloud platform. The user experience is about the same as every other cloud platform once everything is set up. One thing I never got completely sorted was how to set up metrics. There’s probably a service for it, but I couldn’t figure out how to get a dashboard with actual (as opposed to provisioned and unused) CPU and memory stats.

                                          1. 1

                                            See above. It’s an independent open source meta-project contributed to by many large players, but its popularity is on the wane in favor of containers, which are much easier to manage.

                                          2. 1

                                            OpenStack is neither Red Hat’s nor Canonical’s. It’s an open source project backed by a number of large vendors, but, to be honest, every single time I talk to anyone who has implemented it in their day job, they say the same thing: the initial build goes great, but the upgrade story is a nightmare, and it’s actually a constellation of separate, largely independent projects with varying levels of developer contribution, so the amount of polish varies.

                                            The industry is trending towards container-based solutions, and Red Hat has a very nice container clustering solution for large-scale deployments called OpenShift.

                                            No skin in this game since I don’t use any of them, just passing along what I hear from folks who do.

                                            1. 1

                                              OP mentioned OVH which uses OpenStack: https://www.openstack.org/marketplace/public-clouds/ovh-group/ovh-public-cloud

                                              Wikimedia runs OpenStack for its community: https://wikitech.wikimedia.org/wiki/Portal:Cloud_VPS

                                              I’d recommend OpenStack-based clouds over proprietary ones, but I’m biased. That said, if one manages an OpenStack installation badly, the experience can be bad.

                                              This thread mentioned containers (OpenShift is based on Kubernetes, which schedules/orchestrates containers), which some people run on top of VMs (I have seen this with OpenShift on OpenStack Nova, which uses KVM, but also others). If you need VMs, in the long run I’d recommend the other way around: VMs in containers. Two interesting projects in that space: KubeVirt, which is more lightweight and runs KVM in Kubernetes, and Airship, which is a way to install and update Kubernetes and can also run, among other things, OpenStack in that Kubernetes.

                                          1. 1

                                             TL;DR: add an Incorrect flag if that makes more people vote; but that won’t be enough: the scoring algorithm needs to be fixed and made more lobster-like.

                                            Incorrect submissions are also spam. Spam is something unsolicited. Incorrect submissions are not explicitly solicited. Therefore they are also spam. That doesn’t require that you can identify a thing that is being promoted.

                                             You mentioned reputable sources but did not say how you define reputable and whether that is independently testable. Reputation cannot replace testing an argument, but done correctly it can help coordinate finding things that are worth testing. I thus consider “incorrect” the more useful concept here.

                                             What is the goal of flagging? Some reasons may be a negative score (to filter out <0) and/or some admin reaction. Anything else?

                                             Many incorrect submissions still keep a positive score, even after that is shown in the comments. A more specific flag than spam will not necessarily make them go negative. That is not a reason not to add an alias or a more specific flag, if that makes more humans vote on a story at all.

                                             It happens that incorrect information is scored high on this site. One might want a better reputation algorithm, to make it easier to find scientifically correct submissions and not waste time with things that are incorrect; one that improves quality over time without resorting to authority or majority rule.

                                             It needs to work on something more specific than one score/tally, like whether a tag applies (or an even more expressive assertion). It would require calculating reputation individually for each user, based on whom they trust to use a tag correctly. That is a more lobster-like approach, as lobsters don’t form a shared hierarchy but each their own view of one. The trust calculation needs to be able to differentiate between trust for different tags, and this trust between two humans needs to be calculated from the tags/assertions they put into the system, not set directly. It needs to be able to keep some input private, by user choice. It needs to take into account whether people acknowledge their errors (to answer: would I enter an adversarial collaboration with this person?). And it needs a way to avoid Sybil attacks. (A rough sketch follows below.)

                                             Explicitly tagging things as “correct scientific argument” is needed to make the goal more obvious, to not exclude fiction and jokes, and to allow any categorisation goal. It should be possible to tag only a part of a submission.

                                             A good collaborative categorisation system makes it possible to take into account when people agree on a specific assertion without requiring agreement on other or more general assertions. (The current one fails this by requiring agreement on the very unspecific “up or downvote”.)
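
                                             To sketch the per-tag, per-viewer trust idea from above (entirely my own construction, not an existing lobste.rs mechanism):

                                             from collections import defaultdict

                                             # (viewer, tag) -> list of (other_user, agreed) observations, derived
                                             # from assertions both users made in the system, never set directly.
                                             observations = defaultdict(list)

                                             def trust(viewer, other, tag):
                                                 """Viewer-specific trust that `other` uses `tag` correctly."""
                                                 past = [agreed for user, agreed in observations[(viewer, tag)]
                                                         if user == other]
                                                 if not past:
                                                     return 0.0  # no history: new accounts start untrusted (a crude Sybil guard)
                                                 return sum(past) / len(past)

                                             def weighted_tally(viewer, tag, votes):
                                                 """Score, as seen by `viewer`, of the assertion that `tag` applies,
                                                 weighting each voter's yes/no by the viewer's per-tag trust in them."""
                                                 return sum((1 if applies else -1) * trust(viewer, voter, tag)
                                                            for voter, applies in votes.items())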

                                            1. 1

                                              It looks like you don’t want Free Software, but a communist revolution.

                                              1. 2

                                                You can’t have one without the other.

                                                1. 1

                                                  Your above assertion is not specific enough to test or falsify.

                                                  Karl Marx didn’t know about falsification (thus science) yet as that was only described after his death. You could write arguments that others can check instead of merely taking your word for it.

                                                  In the article you say:

                                                  users and corporations do not have aligned incentives […] therefore, so long as capital has the ability & incentive to influence software production, software will be pitted against users.

                                                  This is circular reasoning or perhaps a tautology. You give the conclusion as a reason for the conclusion.

                                                  Quirky, personal software that is aggressively unscalable

                                                  Software for humans (as opposed to only for some humans but not others) is required to be more scaleable than merely commercial software as there are more than 7G of them. Commercial software merely needs to work with enough humans to extract money, it can ignore the rest.

                                                  What is more important to you: Human needs/rights or the inability of the software to be used by corporations?

                                                  Is a communist revolution your goal in itself or is that only your suggested solution? Are there some goals you have where if some other idea could reach them, it would be fine for you not to get a communist revolution?

                                                  1. 3

                                                    Karl Marx didn’t know about falsification (thus science) yet as that was only described after his death. You could write arguments that others can check instead of merely taking your word for it.

                                                     I didn’t mention Marx in this essay. However – your statement is incorrect. Marx wrote during the industrial revolution, & his explicitly stated goal was to remove the woo from existing socialist movements (like the Fabians) by sticking to rigorously scientific analyses of historical economic data & constructing falsifiable models of economics. This is what is meant by the term ‘historical materialism’: an attempt to bring economics out of the realm of Smith’s just-so stories & into the realm of science, submitting it to the same rigors that were then being applied to thermodynamics.

                                                     (Last I checked, the scientific method was attributed to Sir Francis Bacon, a few centuries before Marx was born. The nineteenth century was the absolute peak of the kind of scientific reasoning you describe – named and documented by Popper a half century after its moment ended in the wake of the virtual demolition of the analytic movement in philosophy by Gödel & Wittgenstein.)

                                                    This is circular reasoning

                                                    No, I just thought that the incentive misalignment was so obvious that I didn’t need to specify it.

                                                    Software for humans […] is required to be more scaleable

                                                    This point is fair, so long as you assume the existing division between developer and non-developer. I’ve written at length elsewhere that this division is arbitrary, problematic, and a form of rentseeking, & ought to be abolished. Since you haven’t read those pieces, I can’t blame you for thinking that this is a flaw in reasoning.

                                                    What is more important to you: Human needs/rights or the inability of the software to be used by corporations?

                                                    Human needs. (After all, like the rest of us, I write software for corporations all day. I maintain that it is problematic in the long run, but in the short run it is necessary for my own survival.)

                                                    Is a communist revolution your goal in itself

                                                    I’m not really an advocate of revolution per-se. It’s messy & often ineffective, since military structure, while necessary for war, runs counter to an equitable arrangement of people. Gradualism has other problems, but luckily there are solutions that are neither revolutionary nor gradual. My own preference is the scheme outlined in bolo’bolo: the construction of alternatives, independent of existing systems, which people can move to freely. Popular alternatives starve incumbents of labor & resources. In the late stages this is almost guaranteed to result in direct attacks, but by that point, if things are managed well, the incumbents will be too weak to do much damage.

                                                    1. 1

                                                      Let me repeat my first sentence with a different focus:

                                                      You didn’t refer to a specific model of economics.

                                                      You can’t have one without the other.

                                                       Even if you specified a correct economic model and a matching implementation, that isn’t yet sufficient for your claim to follow. You’ll need to specify properties without which Free Software cannot occur, with enough precision that it is easy to say whether something matches those properties or not.

                                                       Marx […] constructing falsifiable models of economics.

                                                       One can state something falsifiable without explicitly having defined falsifiability or deciding to use it as the demarcation from non-science. I only mentioned Marx to be a bit more specific than mentioning the word communism. Marx certainly did make non-falsifiable claims in addition to falsifiable ones.

                                                      AFAIK Sir Karl Popper defined falsifiability long after Marx was dead and suggested anything not falsifiable is not science. Francis Bacon didn’t require falsifiability.

                                                      No, I just thought that the incentive misalignment was so obvious that I didn’t need to specify it.

                                                       There are certainly obvious cases of incentive misalignment. What is not obvious is that in all past, current and possible future systems in our reality, “users and corporations do not have aligned incentives”. (For it to be falsifiable you need to specify how to detect a corporation. Does Alice and Bob’s collusion constitute a corporation?)

                                                      Software for humans […] is required to be more scaleable

                                                      This point is fair, so long as you assume the existing division between developer and non-developer.

                                                      […] this division is […] a form of rentseeking […]

                                                       It doesn’t matter under what economic system or how the labour on that software is organised. That does not change that a human would need to be able to run compatible protocols together with over 7G humans in order not to exclude some of them. Otherwise: sorry, no internet1 for you, it’s full; make your own internet2, which won’t fit everyone either. So some won’t be able to directly talk to some others. This would create the opportunity for those in both internets to seek rent for forwarding. Therefore the non-scalable property enables exactly what you said you want to avoid.

                                                      Unscalable software would compromise the rights of some humans. Thus it does not serve your stated goals.

                                                      1. 3

                                                         Even if you specified a correct economic model and a matching implementation, that isn’t yet sufficient for your claim to follow. You’ll need to specify properties without which Free Software cannot occur, with enough precision that it is easy to say whether something matches those properties or not.

                                                        You’re applying a lot more rigor to a flippant response (to someone’s flippant comment) than to the article itself, wherein I specify exactly what I mean by free software and exactly how it’s currently being prevented & a few ways to close the loopholes that are being exploited.

                                                        The goal of free software is to align the incentives of software developers and software users by making them the same group. (If we want to bring Marx into this at all, we have a convenient place to do it: he defines communism as any state of affairs in which the incentives of management and labor are aligned because management and labor are the same group. I wasn’t drawing on Marx in this essay but on Debord, so outside of this point that’s a much more suitable comparison to make.)

                                                        The labor of software use always falls upon individuals, so user incentives are best defined by individual incentives. Because almost all individual software users sell their bodies to corporations for at least eight hours a day, during which they are required to perform actions in the ways demanded by corporations rather than in the ways they would naturally perform those actions, points of friction become biased based on standard procedures. It is only in the interests of a corporation to lubricate the points of friction that occur when performing corporate-controlled work, and then only when the people who encounter those points of friction are insufficiently disposable.

                                                        Marx certainly did make non-falsifiable claims in addition to falsifiable ones.

                                                        Sure, as does everybody. And, Marx was wrong on a number of points (among them, he thought that ‘capitalism with a human face’ was impossible & so thought that the internal contradictions of capitalism would cause it to collapse substantially faster).

                                                        He had, as an explicit goal, to be scientific – by which he meant creating systematic models of the entire relevant portion of the problem & testing them against both historical data & new experiments, eschewing metaphysical ideas and wishful thinking in favor of mathematics & mechanisms, and emphasizing the importance of revising models when they have failed to be predictive even when the resulting new model is complicated by this collision between theory and practice. In other words, in terms of Popper’s idea of science (which itself is fairly controversial), Marx’s economics scores higher than many current fields (including practically all of medicine outside of drug testing).

                                                        There are certainly obvious cases of incentive misalignment. What is not obvious is that in all past, current and possible future systems in our reality, “users and corporations do not have aligned incentives”.

                                                        If we privilege the needs of individuals over the needs of corporations, then any time the two come into conflict and the corporation wins, it is an unnecessary tragedy. When their needs are aligned, it is irrelevant: the equivalent number of individuals coming together in a different structure would have the same power to fulfill that need. When they are not aligned, corporations tend to win, because that is what corporations are for.

                                                        you need to specify how to detect a corporation

                                                        The usual way will suffice. A corporation is a legal entity that can hold capital and resources the way a person does, and can usually be sued in the way a person can be, in such a way that it forms a buffer between an individual and liability. A corporation continues to exist so long as it periodically pays the government a registration fee, unless it is dissolved by its owners.

                                                        This means a corporation always has an incentive to accumulate capital, because accumulated capital is capable of protecting it from all possible risks, & any mechanism for accumulating capital that does not risk losing more capital than the company can afford makes sense (i.e., fines for illegal activity are part of the cost of business & bad PR can be offset by marketing). It also means that the owners of a corporation are incentivized to dissolve a corporation & run off with its capital if they think it’s no longer doing its job. A corporation is a disposable shield that is immortal so long as you keep feeding it, but gets hungrier as it gets bigger: an ideal tool for protecting individuals from the voracious maw of the market.

                                                        a human would need to be able to run compatible protocols together

                                                        I’m not opposed to the concept of protocols. I’ve got a problem with monocultures.

                                                         Software scalability matters a lot with regard to centralization. If one organization owns all the machines for doing a thing, then those machines need to be efficient, because the margins are the difference between skimming enough profit to survive and slowly bleeding to death. In a decentralized system, most of the things we mean by scalability don’t matter: one person isn’t doing maintenance for a thousand machines but for one machine, so it makes more sense for that one person to be comfortable with their one machine than it does for all thousand machines to be interchangeable; one person isn’t moderating the feeds of all of Malaysia, but instead, everybody tends their own moderation and perhaps helps out with the moderation of their friends.

                                                        I’m ultimately less interested in opening up low-level protocols than front ends, because the shape of behavior is controlled by the adjacent possible. I go into this in detail in other essays, but where it intersects with this topic is: when you make software for employees, the people who use it are biased toward acting and thinking like employees, which is a very narrow slice of all the different hats one could wear; likewise, when everybody runs the same software, they think more alike, because their imagination is shaped by their habits.

                                                         We know from observing users that they are capable of using an extremely flawed folk-understanding in conjunction with trial and error to produce effective, if convoluted, mechanisms for getting things done with inadequate tooling & inadequate documentation. In other words: all humans are born hackers, and the difference between somebody who rigs up some incredible hairball of an Excel spreadsheet to do something & somebody who does it in Perl is not a matter of inherent ability or even really skill but of access to the ability to imagine better tooling (i.e., a matter of exposure, education, & identity). Just as MySpace’s willingness to host arbitrary CSS led to often-messy but incredibly personal and expressive profile pages, making the resources available and visible by default to everyone to modify all the code on their system will result in even self-professed ‘non-technical’ users adapting their systems to fit their preferences in extreme ways – and the more available we make that ability to imagine a better fit, the greater variety & the closer fit we will see. (In other words, this is a natural extension of free software: rather than collapsing the distinction between ‘a developer’ and ‘the developer’, truly collapse the distinction between ‘developer’ and ‘user’.)

                                                        Even opening up the whole system, we should expect to provide reasonable defaults, because writing an IP stack is an acquired taste – I don’t expect many people to dig that deep when optimizing for a real-world task. Even so, monocultures of implementation are dangerous in many ways, so we ought to have more fully independent reimplementations of just about every protocol. If a protocol cannot be trivially reimplemented, then it has failed at being a protocol. Vary every behavior not specified with ‘must’, bring out the nasal demons, etc: the ecosystem will improve because of it.

                                                        Computer software is made for and by people who already like computers, and this prevents problems that are obvious to other groups from being solved. Require less initial buy-in at the level of software creation and you’ll get computers that are worth liking.

                                                        some won’t be able to directly talk to some others

                                                        If centralized social media has taught us anything, it ought to be that people don’t really want to open themselves up to being talked at directly by seven billion strangers, because scale of direct contact amplifies the power of griefers and low-effort shitposting a lot more than it amplifies useful forms of communication.

                                                        SSB has the best model I’ve seen in action. You’re responsible for managing your own feed, but visibility is based on the existing social graph & is asymmetric. Basically, so long as you don’t follow pubs, you can keep the presence of hostile randos looking to bother you down to a dull roar without putting the labor of that on some overworked moderator. Visibility & access follows the usual course of human connectedness, & tactics created by a fully flat network like sealioning and swarming don’t really work.

                                                        1. 2

                                                          Lots to chew over here. Thanks for taking the time to write!

                                              1. 4

                                                 The author gives no alternative to the web of trust, and their arguments against it do not hold up under scrutiny.

                                                 Long-term identities could be built on rotating keys. Yes, this is one of the many areas where PGP is lacking (its identity key rotation is not user-friendly). It is also where Signal is lacking (it binds long-term identity only to the phone number instead of allowing chaining of keys).
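
                                                 For illustration, a minimal sketch of such chaining (my own, assuming Ed25519 via PyNaCl, not any specific product’s format): each new key is endorsed by a signature from the previous one, so trust in key N transfers to key N+1.

                                                 from nacl.signing import SigningKey, VerifyKey

                                                 old = SigningKey.generate()  # the currently trusted identity key
                                                 new = SigningKey.generate()  # the freshly rotated key

                                                 # The rotation statement: the old key signs the new public key.
                                                 rotation = old.sign(bytes(new.verify_key))

                                                 # A verifier who already trusts old.verify_key checks the chain link
                                                 # (verify() raises BadSignatureError if the statement is forged) and
                                                 # then moves their trust to the announced successor key.
                                                 old.verify_key.verify(rotation)
                                                 successor = VerifyKey(rotation.message)
                                                 assert bytes(successor) == bytes(new.verify_key)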

                                                None of this identity goop works. Not the key signing web of trust […]

                                                 Yet they give no details on how it doesn’t work. For finding the key of someone you haven’t met, the web of trust idea improves security over trust on first use. Man-in-the-middle attacks work against trust on first use, but not against a web of trust.

                                                Experts don’t trust keys they haven’t exchanged personally. Everyone else relies on centralized authorities to distribute keys.

                                                They just mentioned web of trust in the same paragraph… What do they think of people who do use the web of trust?

                                                 Yes, the usability and privacy of the web of trust can be improved; see e.g. https://claimchain.github.io/. Getting introduced to other people is something many people do in real life and did before the Internet existed.

                                                1. 12

                                                  At this time, it’s up to the proponents of web-of-trust to prove that it’s a workable concept, and not a theoretical construct that doesn’t work in today’s world.

                                                  The recent brouhaha over keyservers shows that the infrastructure at least is sorely lacking.

                                                  1. 3

                                                    At this time, it’s up to the proponents of web-of-trust to prove that it’s a workable concept, and not a theoretical construct that doesn’t work in today’s world.

                                                     I personally believe it is a better option than centralized services (remember StartSSL?) if you use TLS client certificates (see the weird CAcert) or OpenSSH keys (see Monkeysphere).

                                                    1. 3

                                                      At this time, it’s up to the proponents of web-of-trust to prove that it’s a workable concept, and not a theoretical construct that doesn’t work in today’s world.

                                                       Some open-source projects use a web of trust, for example Arch Linux or kernel.org. IIRC Debian also requires their developers to have “strongly connected” keys.

                                                  1. 3

                                                    Would you have wanted to use your marketing tag on it if the submitted link was https://github.com/pilosa/pilosa with the title it has there?

                                                     Next to the usual information GitHub extracts, it includes links to an explanation of the data model and query language, and links to the implicit offer of collaboration under terms that are inbound=outbound.

                                                     The vote on a submission conflates multiple things. Among other things, it covers the presentation of the topic in the submission and the care of the submitter in selecting the correct link and title for the topic. If everyone voting agreed on all aspects but not on how to collapse those into one vote, the overall vote score becomes less useful for gauging usefulness to everyone.

                                                    1. 2

                                                       Well, a straight-up link to the repository of a commercial product is still marketing.

                                                       Though an article explaining in detail the implementation of such a product, without any obvious “call to action”, would be in an acceptable gray zone, in my opinion.

                                                    1. 4

                                                      From the linked advice page:

                                                      Avoid Safari and Firefox. Under no circumstances use the Tor browser (it’s okay to use Tor, but do it with Chrome, and seek additional training on how to set it up).

                                                       I guess Chrome was chosen for U2F reasons… well, thankfully a few days ago Firefox enabled security.webauth.u2f for all users out of the box, and Google registration works :)

                                                      But.. what the hell is that second part?

                                                      I myself use Tor in regular Firefox most of the time, because I don’t need anonymity and all I want is to obscure my home IP address, but Tor Browser is THE ONLY way to achieve anonymity. Only Tor Browser goes out of its way to defend against all known fingerprinting methods. Why would anyone say to NEVER use it?!?

                                                      1. 9

                                                        I’m not really in a position to endorse or dispute these opinions, but I will relay them:

                                                         1. Thomas Ptacek said Tor Browser was possibly the least secure browser, though he didn’t elaborate nearly as much as I wish he had. However, I do gather that is/was a common opinion: https://news.ycombinator.com/item?id=14251139
                                                         2. Exploit broker The Grugq argues that using Tor Browser puts a big fat “target me” sign on you.

                                                         P.S. I do think the Firefox advice is probably dated. They’ve made a lot of progress.

                                                        1. 6

                                                          Fingerprinting isn’t a problem in this specific threat model. Being a day late with security patches is a huge one.

                                                          1. 7

                                                             More precisely: these users are subject to targeted attacks (to steal their money or discredit their campaign). Tor Browser protects you from global, passive attacks.

                                                            1. 3

                                                               AFAIK Tor does not protect against a global passive adversary. See e.g. https://www.torproject.org/docs/faq.html.en#AttacksOnOnionRouting or https://arxiv.org/pdf/1703.00536v1.pdf (The Loopix Anonymity System; Table 2 on page 13 compares anonymity systems).

                                                              1. 4

                                                                Tor protects you from a global passive adversary in the same way that body armor protects you from bullets.

                                                                You might still prefer not to get shot at…

                                                          2. 9

                                                             He has a bad habit of arguing from authority on stuff like that. Ego tripping. If one wants to save time, it’s better to have links one can quickly pull up for any topic. Then the audience gets enough information to evaluate the claim for themselves, while the person helping them gets it done quickly. In this case, it appears the Tor Browser has vulnerabilities the regular browser doesn’t have, due to update lag.

                                                            His mention of collecting high-value targets implies that those targeting them are incentivized to spend large sums of money on exploits for attacking them. They probably already have exploits for the major browsers. The last thing you want to be is a possible high-value target using an unpatched version of a tech they have exploits for. It makes things easier, not harder, for the high-strength attackers. If you use Tor, it should be with the most up-to-date components. If concerned about fingerprinting, use it on a vanilla-looking OS or configuration that’s really popular. If that is risky, adjust your usage habits accordingly.

                                                            1. 9

                                                              The context is that he’s talking to unsophisticated users who are worried about being hacked, not trying to convince people who already have opinions about information security. I don’t think there’s a way around presenting that kind of piece as an appeal to authority.

                                                              I’d personally get more out of an in-depth companion piece, but it’s not really relevant to his goals.

                                                              1. 4

                                                                The context of tptacek’s recommendation is Hacker News, where most users are technical; he had detailed information on lots of topics, he became a celebrity (their No. 1), and he has since dismissed counterpoints without evidence all the time. Occasionally, he references his status or connections as a reason to listen. I always told him none of it matters to me: evidence first, whether obscure or famous.

                                                                That he’ll spend a lot of time in the discussions but argue around providing evidence shows it’s an ego thing. I got my karma there initially by countering such celebrities with claims linking to evidence. I think the RSA patents argument was the closest he came to providing a pile of citations, and I had to work to get that out of him. I always had to nearly force him to provide evidence, or he just disappeared the second I did so myself, as in the secure-browser debate.

                                                                1. 6

                                                                  I think we’re referencing different people there. I meant Maciej, who I took to be the person providing the advice page (I suspect he probably conferred with Thomas about it, but I think it’s still in his name).

                                                                  As for Thomas, I definitely would prefer if more of his comments were longer and provided more justification. However, it’s not like he’s given no justification in various threads. It’s true that Tor Browser had a weird update cycle, it’s true that it was a potential target mark/monoculture for sensitive targets, and it’s also true that Firefox didn’t have as much sandboxing back in 2017.

                                                                  1. 9

                                                                    Yeah, this isn’t accurate, and I’ve tangled with tptacek any number of times over there. Also, maybe don’t import that bullshit over here; there’s no need, whatsoever, to run through everyone’s grievances with other accounts on a completely separate, and at least over here, highly disliked website. It’s not an ego thing, for one, and for another, given that I’m someone who has absolutely been in a position to care about things like this, I’m grateful he does what he does over there. Everything in regard to computer security, from him, in regard to things I care about enough to follow up on, has proven to be correct.

                                                                    1. 3

                                                                      Oh sure. If I wanted drama, I’d have tagged him in the comment. I’d rather not bring drama here. Just letting the other commenter know the omission was deliberate and to just do their own digging when he does that.

                                                                2. 4

                                                                  He covers this in this post: providing simple answers that cover the most ground, to avoid decision paralysis. I think in other communication channels he’ll be more willing to talk details, but “just buy an iPhone” is, to a first approximation, the best advice in this context, as is “just use Chrome” (it’s all in Google Docs anyway!).

                                                                  The security issue with the Tor Browser is extremely bad. I can sit around, wait for a FF exploit, and immediately use it on a bunch of people for probably at least 24 hours. It’s so dangerous for any political campaign.

                                                              1. 10

                                                                To re-enable all disabled non-system addons you can do the following. I am not responsible if this fucks up your install:

                                                                Open the browser console by hitting ctrl-shift-j

                                                                Copy and paste the following code, hit enter. Until Mozilla fixes the problem you will need to redo this once every 24 hours:

                                                                // Re-enable *all* extensions
                                                                
                                                                async function set_addons_as_signed() {
                                                                    Components.utils.import("resource://gre/modules/addons/XPIDatabase.jsm");
                                                                    Components.utils.import("resource://gre/modules/AddonManager.jsm");
                                                                
                                                                    // Fetch every add-on in the database (the filter accepts all of them)
                                                                    let addons = await XPIDatabase.getAddonList(a => true);
                                                                
                                                                    for (let addon of addons) {
                                                                        // The add-on might have vanished; we'll catch that on the next startup
                                                                        if (!addon._sourceBundle.exists())
                                                                            continue;
                                                                
                                                                        // Only touch add-ons whose signature could not be verified
                                                                        if (addon.signedState != AddonManager.SIGNEDSTATE_UNKNOWN)
                                                                            continue;
                                                                
                                                                        // Mark the add-on as not requiring a signature, notify listeners,
                                                                        // and recompute its disabled state so it is re-enabled
                                                                        addon.signedState = AddonManager.SIGNEDSTATE_NOT_REQUIRED;
                                                                        AddonManagerPrivate.callAddonListeners("onPropertyChanged",
                                                                                                               addon.wrapper,
                                                                                                               ["signedState"]);
                                                                        await XPIDatabase.updateAddonDisabledState(addon);
                                                                    }
                                                                    XPIDatabase.saveChanges();
                                                                }
                                                                
                                                                set_addons_as_signed();
                                                                

                                                                Edit: Cleaned up code slightly

                                                                1. 11

                                                                  Or, just go get the hotfix directly and install it. This also worked for my Firefox Android install.

                                                                  https://storage.googleapis.com/moz-fx-normandy-prod-addons/extensions/hotfix-update-xpi-intermediate%40mozilla.com-1.0.2-signed.xpi

                                                                  1. 1

                                                                    Absolutely the better solution now that that exists!

                                                                  2. 5

                                                                    To get the command input line in the browser console one might need to set devtools.chrome.enabled in about:config to true.

                                                                    1. 1

                                                                      Will this affect the add-on signature check once Mozilla resolves the issue? The folks at Mozilla must be having a hard time; I just woke up to an add-on-less browser, and it seems the issue is pretty widespread.

                                                                      1. 1

                                                                        It shouldn’t, but I can’t make any guarantees. I certainly wouldn’t complain to Mozilla if something broke after.

                                                                        It’s basically an adapted version of the verifySignatures function, except instead of setting signedState to the result of the signature check, it sets it to a ‘doesn’t need a signature’ value if it is currently at a ‘couldn’t verify signature’ value.

                                                                    1. 20

                                                                      This is browser.send_pings in about:config, but Mozilla doesn’t make it sound like this will stick around.

                                                                      The flip side of existing tracking hurting user performance is that leaving pings off increases the cost and unreliability of tracking. If Mozilla sees existing tracking as a performance issue, why not improve performance even more by blocking it natively? Their response sounds like learned helplessness. I don’t understand this decision at all.
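
                                                                      For what it’s worth, the pref can also be read and flipped from the privileged browser console mentioned elsewhere in this thread (devtools.chrome.enabled must be on). A minimal sketch; about:config works just as well:

                                                                      // Read and flip the hyperlink-auditing pref programmatically
                                                                      Components.utils.import("resource://gre/modules/Services.jsm");
                                                                      console.log(Services.prefs.getBoolPref("browser.send_pings")); // false by default
                                                                      Services.prefs.setBoolPref("browser.send_pings", false);       // keep pings disabled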

                                                                      1. 7

                                                                        I assume that the domain blacklist that Firefox uses for content blocking will apply to the ping attribute too?

                                                                        One of the big use cases I can imagine for this attribute is outgoing links on a site like Reddit, which counts clicking an article header as a sort of upvote. Right now, they do it with a redirect. There’s no way Firefox could figure out that https://reddit.com/outgoing/AESTNEQUWFPI is going to redirect to https://lobste.rs/, assuming Reddit wanted to be difficult to bypass, so it’s kind of inevitable that they’re going to be able to track this click (also, the tracking should be very reliable, and not very expensive). The ping attribute doesn’t really make it much cheaper, either. All it does is make it faster.
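
                                                                        To make the redirect pattern concrete, here is a minimal Node/Express sketch of the server side; the route name and the targets table are made up for illustration, not anything Reddit actually runs:

                                                                        // Hypothetical redirect-based click tracker: log the click, then send
                                                                        // the browser on to the real destination with a 302
                                                                        const express = require("express");
                                                                        const app = express();
                                                                        
                                                                        // Hypothetical id -> destination table
                                                                        const targets = { AESTNEQUWFPI: "https://lobste.rs/" };
                                                                        
                                                                        app.get("/outgoing/:id", (req, res) => {
                                                                            console.log("click", req.params.id, req.ip); // stand-in for real click logging
                                                                            res.redirect(302, targets[req.params.id] || "/");
                                                                        });
                                                                        
                                                                        app.listen(3000);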

                                                                        1. 11

                                                                          One of the big use cases I can imagine for this attribute is outgoing links on a site like Reddit, which counts clicking an article header as a sort of upvote. Right now, they do it with a redirect.

                                                                          Yeah this use case (“use a redirect so we can track clicks”) is super common and I consider ping= to be preferable on almost every count:

                                                                          • link hrefs become accurate and unobscured;
                                                                          • instead of a roundtrip to get the Location header, you go straight to the new page (and you might get benefits from dns or link prefetching);
                                                                          • you can concretely identify tracking requests by their content-type, and you could potentially use this to block them (I’m not sure if they can be captured by a Firefox extension, but that seems a plausible route for blocking them).

                                                                          This is also better than sendBeacon for this use case, IMO. With sendBeacon you have to roll your own interceptor, and hook every link click. This doesn’t totally replace beacons, but it dramatically improves performance for tracking exit link clicks.
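
                                                                          For illustration, a minimal sketch of such a link (tracker.example is a placeholder): on click, the browser POSTs a small body with Content-Type text/ping to each ping URL in the background, while the href stays honest:

                                                                          // Equivalent markup: <a href="https://lobste.rs/" ping="https://tracker.example/click">lobste.rs</a>
                                                                          const a = document.createElement("a");
                                                                          a.href = "https://lobste.rs/"; // the real, unobscured destination
                                                                          a.setAttribute("ping", "https://tracker.example/click"); // space-separated tracking URLs
                                                                          a.textContent = "lobste.rs";
                                                                          document.body.appendChild(a);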

                                                                          1. 3

                                                                            I wouldn’t be surprised if the tracking redirect never goes away, because it’s 100% reliable, and ping won’t be (legacy browsers, ad blockers…)

                                                                          2. 3

                                                                            :(

                                                                            The thing is, with URL spoofing, at least I see that the click will be tracked, and can decide not to click it. Whereas with the ping attr, I won’t have the slightest idea :(

                                                                            Unless maybe the browser showed both URLs in the status bar when I hover my mouse over a link with the ping attr? Did someone suggest such a UX to Mozilla already?

                                                                          3. 2

                                                                            Does Firefox content blocking cover what you are asking for? I would expect that the hyperlink ping functionality will be subject to content blocking.

                                                                            1. 1

                                                                              I do have this on, and uBlock Origin with blocklists to cover trackers. I hope these are caught by one or both.

                                                                          1. 8

                                                                            (I’m the author of the PR)

                                                                            Keybase is a public key database, and one of the things they have added that other large public key databases (such as SKS and PGP) lack is the ability to tie other accounts across the internet to your public key. This reduces the chances of someone MITMing your communication with someone else.

                                                                            This being said, I also rarely use Keybase itself, but I had some free time and I wanted to see what Ruby on Rails was like.

                                                                            1. 7

                                                                              They didn’t need to add the ability to tie other accounts across the internet to your public key. Signing your username would amount to that with a public key lookup. Just put it in your bio. No central server needed.
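
                                                                              As a sketch of how little machinery that needs, pasteable into a browser console (ECDSA P-256 via WebCrypto stands in for whatever key type the key database serves; this is not Keybase’s actual protocol):

                                                                              // Sign a claim tying a username to a key; publish base64(sig) in your bio.
                                                                              // Anyone who looks your public key up in the key database can verify it.
                                                                              const claim = new TextEncoder().encode("I am alice on lobste.rs"); // hypothetical user
                                                                              const keys = await crypto.subtle.generateKey(
                                                                                  { name: "ECDSA", namedCurve: "P-256" }, true, ["sign", "verify"]);
                                                                              const sig = await crypto.subtle.sign(
                                                                                  { name: "ECDSA", hash: "SHA-256" }, keys.privateKey, claim);
                                                                              const ok = await crypto.subtle.verify(
                                                                                  { name: "ECDSA", hash: "SHA-256" }, keys.publicKey, sig, claim);
                                                                              console.log(ok); // true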

                                                                              1. 2

                                                                                That’s what most users have been doing, too.

                                                                              2. 2

                                                                                So why not something open like the MIT keyserver instead?

                                                                                1. 4

                                                                                  I’m not sure what you’re asking here. (And the MIT keyserver is actually just part of the SKS pool)

                                                                                2. 2

                                                                                  In OpenPGP, tying other accounts across the internet to your public key is not the responsibility of the public key transport / database / directory, but of the clients, based on information in the public key. While an OpenPGP public key conflates the account on some service with the name of the person (and possibly some other things) under the label “identity”, it does support it. It is specified in RFC 4880 under the name User ID Packet, which the GPG CLI calls an identity. Usually people use it to tie a GPG key to an email address, e.g. “Alice <alice@example.org>”, but the specification does not restrict it to email accounts.

                                                                                1. 12

                                                                                  I would find Keybase integration useful. I’m in the core audience: People who have a specific reason to worry about others trying to impersonate their friends, in my case due to political activity and online harassment mobs.

                                                                                  I share the concerns that others have about it being a for-profit company, but there’s simply no comparable service. The closest open source equivalent is the PGP web of trust, which… I might someday understand, but it will never be usable enough that I can teach others how to use it.

                                                                                  1. 2

                                                                                    AFAIK Keybase doesn’t support 1) storing whether you verified a person’s identity, e.g. by exchanging fingerprints in person, nor 2) finding a possible path in a web of trust, nor 3) storing and calculating whether you trust the people from 2).

                                                                                    Thus I think the closest GPG equivalent of how it handles keys is trust on first use and keeping your keyring in git, without any web of trust and without signing others’ keys.

                                                                                    1. 2

                                                                                      Agreed. However, verifying keys isn’t the whole picture. Verifying social media profiles is also important.

                                                                                  1. 9

                                                                                    I may be coming in to this conversation a bit late, but I work on the Keybase project. I’m keybase.io/chris on keybase, not that I can prove that easily here (yet :-) ). This might help: https://chris.keybase.pub/lobsters.txt . Some people asked me to come here and offer support for this.

                                                                                    We and plenty of others have already written extensively about (1) why PGP is inadequate (especially the old keyserver model, which is unusable for most people and dangerous in certain ways), and (2) why we need to find a way to let people cryptographically prove connections between services and keys, using usable software. So I won’t add more here. But a few specific thoughts:

                                                                                    (1) I personally would want to look at a user in Keybase and, if they’re interested, know who they are here on lobste.rs. I only lurk here occasionally, but I feel like I can learn a lot more about a person’s identity following them here than, say, to other services.

                                                                                    (2) I’d like to see transitive connections between something like their personal site, their GitHub (or upcoming other git integrations), and lobste.rs.

                                                                                    (3) The PR actually isn’t complicated, and it doesn’t prevent connections to other services, cryptographic or not.

                                                                                    (4) There’s not much I can say to the “Keybase is a private company” thing. Not all companies are evil. But answering that is like responding when my brother tells me to relax. I AM FREAKING RELAXED DAN. And yeah, Keybase is funded in the traditional model, and tbh we couldn’t have gotten it to where it is without that model.

                                                                                    (5) There are different forms of Internet idealism that share a common user base: decentralization, privacy, security, “freedom” (as in free), etc. These are all different ideals that we idealists are pursuing. But almost no one is tackling every single one at once with a project, and if anyone is, good luck to that project. One of my least favorite things is when the different attempts to solve these problems with the Internet don’t satisfy each other on every axis, and halt the advancement of all of them. You can imagine lobste.rs being mad that Keybase is a company, Keybase being mad lobste.rs data isn’t digitally signed (or its chat encrypted), IPFS being mad both aren’t decentralized enough. Everyone mad at everyone. But all of us can help each other.

                                                                                    Like I said, there’s nothing I can say to this, except I’d love to see Keybase<>Lobsters, and if you do it, I’d encourage you to remove the integration as soon as you feel Keybase is sucky. I bet that wouldn’t happen.

                                                                                    1. 5
                                                                                      1. “PGP sucks and we need something with a better UI!”
                                                                                      2. Someone makes something with a better UI.
                                                                                      3. “But this isn’t 100% developed in a way I like it, we can’t use this!”

                                                                                      And nothing ever changes and we’re still stuck with [pg]pg that even many tech people can’t figure out… 🤷

                                                                                      1. 4

                                                                                        Hi, Chris. Thank you.

                                                                                        My concern about “Keybase is a private company” is that it will compel your developers to make decisions that are technically weak. I’m not just talking about the server source code here, though that is huge.

                                                                                        I am going to construct some specific questions as examples of a class of questions that I think are important. (I invite other crustaceans to ask questions, while we’ve got Chris’s ear.)

                                                                                        Is or is not the Keybase company willing to make a technical improvement to the chat protocol which would eliminate the company’s ability to measure user engagement but increase user security?

                                                                                        Would the Keybase company merge a PR into the official client that added a UI presenting an option for connecting to an alternative server?

                                                                                        What happens to the files I have stored in KBFS, my contacts list, etc., if Facebook buys the Keybase company? Would the company merge a PR now that strengthens users’ ownership of their data, even if doing so makes Keybase a less attractive acquisition?

                                                                                        1. 6

                                                                                          edit: formatting

                                                                                          • yes, I would choose user security over user-engagement tracking. Working on Keybase the product is a nightmare for us from a UX-management standpoint; that is, compared to previous work we’ve done, where we had a lot of good tools at our disposal. If you look through our client you’ll see there’s nothing in there that exists to serve tracking purposes. Everything is about trying to make usable cryptography. Any compromises are typically an internal conflict between convenience and security (which is the real dilemma, not tracking and security). Heck, even our website doesn’t have Google Analytics or any 3rd-party-hosted JS. Lobstahs and Keybase FTW.

                                                                                          • in spirit yes, in practice no. I fear shooting myself in the foot here by admitting that to the people who place decentralization on a higher pedestal than encryption, when forced to choose. But our biggest fear would actually be security related. We’re suuuuuper scared of most PRs. Even small things, like a few lines, we end up re-writing from scratch ourselves. I honestly don’t care about hosting Keybase’s data. It would be cheaper/cooler if a user could host it elsewhere. To be clear, though, the second biggest issue is effort. Keybase isn’t just a half-dozen API endpoints: there’s server infrastructure in the form of traditional API endpoints, real-time streaming stuff, and an encrypted filesystem. Moreover, there’s a presumption that users are all connected, so it’s hard to imagine how the client would work where you and I can talk on it, but my data is on my servers and your data is on your servers. It would take multiple person-years of effort to have some awesome thing working like that. And Keybase wouldn’t be where it is right now if we had focused on this, and realistically… something like 1 in 1000 Keybase users ask for this. As an alternative, consider the fact that Keybase lets you speak easily and securely with someone else to secure other modes of comm. Want to use IPFS + some other encryption software? Exchange your keys on Keybase and don’t look back. We got you started safely! A Keybase integration makes this possible, and we’re happy to have helped. Want to use Signal? Share your phone number and compare your security codes on Keybase. Want to use Tarsnap for backups? Keep your key in KBFS. You can bootstrap all kinds of other software using Keybase; we make totally decentralized software better. And then don’t actually use Keybase’s chat or filesystem for anything else. I’d propose this is the better answer than a Keybase that can understand different servers.

                                                                                          • there’s nothing I can do to address the “what’s stopping you from eventually releasing a bad client” angle… I don’t want this to happen. My answer will continue to be “the client wouldn’t be this good if we weren’t a company” even if it appears that by being a company we’re more likely to have a bad client.

                                                                                          Hope I haven’t shot myself in the foot by admitting some of the difficulties here. Again, (1) this integration would just be used by the people who want it, and (2) lobste.rs could remove it whenever they decide they dislike it.

                                                                                          Thanks for the q!

                                                                                        2. 1

                                                                                          I do appreciate your engaging. I agree with all your points. On point (4), I don’t think any ill will is necessary to wind up with bad outcomes. The incentives of for-profit corporations are such that, when the service they’re providing is essentially for the public benefit, everyone should give careful thought to how the company’s needs and the public’s needs might diverge over time.

                                                                                          Case in point: I’m sure that my employer was sincere about “don’t be evil” when it was first raised as an informal motto - at a time when the company was much closer in size to the size Keybase is now. I’m equally sure that nobody at the top feels that they have changed direction or betrayed their ideals, even with all the controversies the company has been through in the past two years.

                                                                                          With all that said, as I remarked elsewhere in this thread, I rely on Keybase day-to-day and am in favor of the integration.

                                                                                          1. 1

                                                                                            Would you be willing to put in the necessary implementation work so that Keybase doesn’t compete with OpenPGP public key signatures / web of trust / keyservers, but instead cooperates with them? Specifically, if the user has a GPG key, support the following in Keybase (a small verification sketch for point A follows the list):

                                                                                            A) Produce the signatures used in account proofs in a format that can also be verified with a GPG public key.

                                                                                            B) Export/sync account claims/associations with GPG public key identities.

                                                                                            C) Allow a user to automatically sign another user’s GPG key when they follow them. (Use the appropriate signature type to indicate that the fingerprint was not received over a secure channel and that the identity of the human wasn’t checked. Only sign the identities in the key that were verified. Optionally: if the user states that they received a fingerprint, e.g. in person, and verified the identity of the human, by e.g. pasting the fingerprint of the other user, indicate that instead.)
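
                                                                                            A small sketch of what verifying (A) could look like, using the openpgp.js library (v5-style API; armoredKey and signedProof stand for the user’s armored public key and clearsigned proof text, not Keybase API output):

                                                                                            // Verify a clearsigned account proof against an OpenPGP public key
                                                                                            import * as openpgp from "openpgp";
                                                                                            
                                                                                            async function verifyProof(armoredKey, signedProof) {
                                                                                                const publicKey = await openpgp.readKey({ armoredKey });
                                                                                                const message = await openpgp.readCleartextMessage({ cleartextMessage: signedProof });
                                                                                                const { signatures } = await openpgp.verify({ message, verificationKeys: publicKey });
                                                                                                try {
                                                                                                    await signatures[0].verified; // resolves if the signature is valid
                                                                                                    return true;
                                                                                                } catch {
                                                                                                    return false;
                                                                                                }
                                                                                            }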