Threads for pointlessone

  1. 4

    I must be too jaded from hearing how Google is concerned about privacy. It’s obviously true that an OCSP breach can expose browsing history. But so can a Google breach, given that Chrome sends virtually everything to the mothership.

    I’d rather the article didn’t highlight the privacy aspect that much.

    An easy fix for the OCSP privacy concerns would be to route those requests through a central proxy. A breach of the OCSP responder would then only show that Google requested cert checks a bunch of times; no personal data would be exposed.
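
    A toy sketch of what I mean, in Python (the upstream responder URL is made up; the content types are the standard OCSP ones): the proxy re-originates every request, so the responder only ever sees the proxy’s address.

    ```python
    # Toy OCSP privacy proxy: forwards request bodies upstream, so the
    # responder sees the proxy's address instead of the individual user's.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib import request

    UPSTREAM = "http://ocsp.example-ca.test/"  # hypothetical OCSP responder

    class OCSPProxy(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            upstream_req = request.Request(
                UPSTREAM, data=body,
                headers={"Content-Type": "application/ocsp-request"})
            with request.urlopen(upstream_req) as resp:
                payload = resp.read()
            self.send_response(200)
            self.send_header("Content-Type", "application/ocsp-response")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    HTTPServer(("", 8080), OCSPProxy).serve_forever()
    ```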

    One thing I don’t see discussed is how the A in CA is getting eroded. CAs are invited to participate in the submission of revoked certificates to CRLSets. CRLSets are basically embedded databases of revoked certificates. To update this DB, a user needs to update Chrome and restart their browser. At the same time:

    The Chromium source code that implements CRLSets is, of course, public. But the process by which they are generated is not.

    Which means that the ultimate authority to revoke a certificate now rests with Google. They decide whether to include a revoked certificate. Or even whether to include a live certificate.

    1. 3

      You /can/ pay an additional carrier in the form of a VPN, but the vast majority are sketchy and very few have an actual multi-party setup that provides the privacy they all claim. OCSP is also pretty useless because it fails open: if someone is in a position to make you see a revoked cert, they can also trivially defeat OCSP (basically, CAs are terrible and cannot provide enough uptime for their OCSP responders, so requiring a valid response would break a depressing amount of the internet). Then there’s the significant additional pageload impact if you require an OCSP response for your TLS connections. And finally there’s the significant privacy violation (which is less relevant if you’ve hopped on board the Chrome “I want Google to know everything about me” wagon).

      As far as EV certs specifically go: they were an invention by CAs who could see the development of LetsEncrypt coming and realized it would mean losing their SSL-cert printing press, especially as LetsEncrypt does DV validation securely, unlike the commercial CAs that repeatedly misissued certs. EV certs were marketed with a not-insignificant part of the “security” being that they would cost 10k, so it wasn’t possible for criminal orgs to get one. However, once the CAs managed to force browsers to include the stupid green bar/box UI, they realised they could make more money by charging less but selling more EV certs, which negated that argument. They were then also repeatedly found to misissue EV certs, despite the “extended” validation. The biggest improvement in recent years came from the non-CA parts of the PKI community: the introduction of certificate transparency. This is more effective at catching misissued certificates, and it also allows the browser vendors to identify CAs that are not trustworthy, now with objective data justifying removal of trust.

      But even if you assume the CA companies that offered EV were actually competent, the entire concept of the EV UI is flawed. The first problem is that the EV green is not always present: even if a company forked over the money for the green text, it’s trivially easy for them to include non-EV content that makes the green text disappear. And even when that doesn’t happen, plenty of these companies shard at the DNS level and don’t buy separate certs for each host, so you quickly get redirected to a non-green bar again. This leads to the next issue: because the appearance of the green bar was - from the PoV of a user - essentially random, its absence conveys no information. The final problem means that users can actually end up not trusting the EV green text: the name in the EV text is the legal identity, not the marketing identity, and for many companies those are not the same.

      EV certs were never a benefit to the user; their primary reason to exist was CAs trying to maintain their low-cost + high-revenue business model.

      The alternatives to OCSP are better than actual OCSP in every way: they are better for pageload, they don’t depend on CA infrastructure liveness, and they don’t broadcast what sites you’re looking at to a group of companies with a track record of scummy behavior.

      1. 2

        An easy fix for the OCSP privacy concerns would be to route those requests through a central proxy. A breach of the OCSP responder would then only show that Google requested cert checks a bunch of times; no personal data would be exposed.

        Then you are giving all of that data to the proxy operator.

        1. 2

          Google already has all your data. It wouldn’t get anything new by implementing this proxy.

          1. 1

            No it doesn’t. Just because you have chosen to provide an advertising company with access to everything you do doesn’t mean everyone else has.

            1. 1

              That’s not necessarily true. If you don’t have Chrome configured to sync history (or have it configured to sync history encrypted), Google won’t get your full browsing history (though they’ll get a large fraction of it via AdWords and Google Analytics).

        1. 2

          Tests should be run in random order by default.

          Random and parallel by default!

          1. 3

            Parallel, yes; random, not so certain.

            For me, the issue is reproducibility. One thing that has been the bane of my life is flaky tests (for whatever reason). So something that deliberately adds to this flakiness doesn’t help.

            Whenever I write tests that include randomness (for instance property-based tests, https://scalacheck.org/), I always end up specifying the seed so I can reproduce any issues.

            1. 12

              RSpec (and I suspect most test frameworks) lets you specify a seed for a run. When not specified, it will pick one at random. So by default your tests run in random order, but when you have a flaky test you can specify the seed and run the tests in that particular order to debug it. Best of both worlds.
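
              The pattern is tiny even outside any framework. A generic sketch in Python (TEST_SEED is an env var I made up for this sketch, not an RSpec thing):

              ```python
              # Pick a seed unless one was supplied, announce it, and shuffle
              # with it, so a failing order can be replayed via TEST_SEED=<n>.
              import os, random, sys

              seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
              print(f"Randomized test order with seed {seed}", file=sys.stderr)

              tests = ["test_login", "test_logout", "test_checkout"]  # stand-ins
              random.Random(seed).shuffle(tests)
              for name in tests:
                  print("running", name)
              ```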

              1. 7

                If you’re able to reproduce test failures (by failures reporting the seed, and runs being able to take the seed as a parameter), and if you treat failures as actual bugs to fix (whether that be a bug in the application or a bug in the tests), then I don’t have a problem with randomised tests that pass sometimes and fail other times.

                After all, the point of a testsuite is to find bugs, not to be deterministic. So if randomness is an effective way to find bugs, then I’m all for it!

                1. 1

                  After all, the point of a testsuite is to find bugs, not to be deterministic. So if randomness is an effective way to find bugs, then I’m all for it!

                  The issue I would bring up in the case of random test order is that while it does find bugs, the bugs it finds are often not in the application code under test; instead it tends to turn up bugs in the tests. And from there we get into the debate about cost/benefit tradeoffs. If putting in the time and effort (which aren’t free) to do testing “the right way” offers only small marginal gains in actual application-code quality over a bare-minimum testing setup - and often, the further you go into “the right way”, the more you hit diminishing returns on code quality - then should you be putting in the time and effort to do it “the right way”? Or do you get a better overall return from a simpler testing setup and spending that time/effort elsewhere?

                  1. 1

                    Tests are code. Code can have bugs. Therefore, tests can have bugs.

                    But as with any bug, it’s hard to say in the general case what the consequences are. Maybe it’s just a flaky test. Maybe it’s a bug that masks a bug in the application code.

                    You’re also right that software quality is on a spectrum. A one-off bash script probably doesn’t need any tests. And a formal correctness proof is very time/money-expensive; another blog engine probably doesn’t need that level of correctness.

                    Of all the things one can do to improve software quality (including test-code quality), running tests in random order is not that expensive.

                2. 2

                  For me, the issue is reproducibility. One thing that has been the bane of my life is flaky tests (for whatever reason). So something that deliberately adds to this flakiness doesn’t help.

                  Admittedly a truism, but: If the “test” is flaky, then it is not a test.

                  Whenever I write tests that include randomness (for instance property-based tests, https://scalacheck.org/), I always end up specifying the seed so I can reproduce any issues.

                  Exactly. This is very good practice.

                  I have taken it a step further: when I build randomized tests, I have the failing test print out the source code of a new test that reproduces the problem.
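
                  Roughly like this, in Python (a sketch, not my actual code): when the randomized check fails, it prints a paste-ready regression test with the generated inputs baked in.

                  ```python
                  import random

                  def sort_under_test(xs):
                      return sorted(xs)  # stand-in for the real code under test

                  def check_once(rng):
                      xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
                      if sort_under_test(xs) != sorted(xs):  # the property being checked
                          # Emit a deterministic test reproducing this exact failure.
                          print("def test_regression():")
                          print(f"    assert sort_under_test({xs!r}) == {sorted(xs)!r}")
                          raise AssertionError(f"property failed for {xs!r}")

                  rng = random.Random()
                  for _ in range(1000):
                      check_once(rng)
                  ```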

                  1. 2

                    Catch2 always prints its seed, so you can let it generate a random seed for your normal test runs, but if a test fails in CI or something, you can always look back at the log to see the seed and use that for debugging.

                    If you can’t reproduce a failing test because of randomness, that’s a failure of the test runner to give you the information you need.

                    1. 1

                      Have you tried rr? Especially with its chaos mode enabled it’s really helpful for fixing nondeterministic faults (aka bugs).

                      1. 1

                        Do you mean this url?

                        1. 1

                          I do! (oops)

                  1. 5

                    I see the thing you are calling “permacomputing” as a grassroots sustainability effort within the larger computing world. And IMO it’s more about networks and connections between people than it is about computing itself. Instead of tech being sustained by a profit motive to “Build the next products to out-compete the market & get rich at any cost”,

                    Permacomputing is sustained by

                    “we like being able to have our OWN computer, hack it ourselves, and we like the original liberatory promise of the internet. We want to keep that flame alive no matter what challenges we may face now and in the future”

                    • Punk / DIY ethic. Cares about data custody. Anti-surveillance activist. Won’t shut up about “panopticons”
                    • Care more about network effects and usability/accessibility than about software licensing
                    • Seeks liberation of the networks and connections between people more than liberation of all source code.
                    • Care more about “runs on my 10 year old potato PC” than “cutting-edge features” or “scales to billions”
                    • Reject tech elitism and embrace tech inclusivity: Work hard on blurring the “power user” line and challenge the status quo that “good usability / UX isn’t required for ‘under the hood’ utilities and tools”

                    @j3s I agree that the last sections feel like a bit of a cop-out. Ultimately I think this article is more of a goodbye to the FSF ideology than it is a fleshed-out introduction to this new ideology which you perceive as being born right now.

                    You can write more about it later :) It’s probably too much to fit into one easily-consumable post.

                    1. 3

                      This all sounds nice and cosy but, I’m afraid, it can’t be a solution.

                      It seems like it’s nostalgia talking. Remember back in the day when one person could write an OS, a disk driver, their own programming language and a compiler for it, and a word processor in that PL?

                      We’re way past that. Our OSes clock in at millions of lines of code (nearly 43M lines for the Linux kernel alone). Our programming languages are not much smaller (GCC is 15M+, LLVM is 14M+). Our word processors are huge (LibreOffice is 12M+ lines). Our browsers are vast (Firefox is 32M+ lines, WebKit is 26M+, Chromium is 34M+). Our desktop environments are immense (GNOME is 24M+ lines, KDE is 27M+). And that is far from an exhaustive list of the software one might want to use.

                      This is an amazing amount of effort, and we still don’t get to enjoy the backwards compatibility of Windows, the UX polish of macOS, the accessibility of either of those, the power of commercial graphics and video editing software (Final Cut? Photoshop?), or anything even remotely resembling specialised industrial software (CAD, simulation).

                      I get a hobby vibe from it. Everyone is chill and doing their little thing and enjoying what they’re doing. Same as recreational gardening, basically. You have a little garden in your backyard. You have a community of fellow gardeners to talk to and share your seeds/produce with. And you can enjoy the harvest from your two tomato plants, but you will have to pop into your local supermarket for more tomatoes in the off-season and rely on industrial agriculture most of the year.

                      I’m not going into hardware because it’s even further from what can be done on the personal level. It’s virtually impossible to build, say, a modern phone without big industrial backing.

                      1. 1

                        as always, you said it best. i think i’m just going to chop the permacomputing section out & leave it as a goodbye to FSF ideology - that really is what inspired the article anyway.

                        edit: i redid the outro to better line up with what i was trying to express in the first place.

                      1. 10

                        I’m a little conflicted about this piece.

                        • First of all, it’s very hard to read. A+ for “aesthetics” but otherwise it’s a terrible presentation. A little bit of variation and structure would make it much easier to digest. Right now it’s a stream-of-consciousness-type wall of text. I suspect the lack of capitalisation is a major contributor to that.
                        • I don’t think the “permacomputing” concept is explained well in the article. It’s obvious it’s somehow counter to free software but I don’t understand how. The descriptors are vague. The ones related to free software I kinda get, but that’s just because I’m familiar with the concept; I wouldn’t get it if I knew nothing about free software. That’s exactly what happens with the permacomputing bits: I don’t know what it is and I can’t see how the proposed characteristics define it.
                        • I found permacomputing.net. I assume it’s about the same concept. The source doesn’t directly link to it (not that it properly links to anything at all). It reads like some sort of spiritual BS. Which roughly matches the OP vibe.
                        • Free software is very specific about a few things and completely ignores everything else. That makes it easy to apply and identify. It’s easy to say whether a thing is free software, is not, or is out of scope of free-software concerns.
                        • Permacomputing (even more so than permaculture) is extremely vague. To the point that the only possible answer to “is this it?” is “maybe”, or rather “yes”, because there’s no point at which anything can be outside of its scope or fail to follow its definitions. At best a thing can be at the farther reaches of the spectrum of its values. This makes the concept aspirational in nature and practically useless.
                        • The aforementioned vagueness probably explains why OP doesn’t do a good job explaining it.
                        1. 9

                          I initially had the same problem; the lack of capitalisation threw me off. But the more I read, the more I viewed this piece as art and not a (technical) article. The repetition of words, the formatting, the personal feelings and character it exudes made me think of it as art, and I loved it! Prose poetry seems to be what some people would call it.

                          The latest edit with the refined second part indeed makes it look sharper and as others said, ‘drones run linux’ should be on a t-shirt.

                          1. 6

                            From what I’ve seen and read permacomputing is anti-capitalist computing, but calling it that would make it political and not simply a self-described lifestyle label.

                            1. 3

                              Anti-capitalist as in “reuse, don’t buy” (frugality), or as in “here’s how to make a CPU in your backyard with only things you can find in the nearby forest” (DIY, manufacturing)?

                              1. 2

                                You seem to be mistaking capitalism for industrialisation.

                                1. 1

                                  Fair point. But that doesn’t make it any clearer what the movement is trying to achieve, or how.

                              2. 3

                                i think this is true to an extent. :3 i wouldn’t say its explicitly anticapitalist, but a lot of its values are not very compatible with capitalism.

                              3. 5

                                ty for the feedback, I agree with everything you’re saying about permacomputing - the final 2 sections felt pretty rocky to me after I wrote them, and I contemplated cutting the article at “drones run linux” to make it feel sharper.

                                1. 3

                                  TBH, “drones run linux” is hella sharp. I want to put it on a tshirt. =D

                                  1. 1

                                    same tbh, thank you! fyi i rewrote the outro to be less wishy-washy :3

                              1. 1

                                I’m not quite sure what problem it’s trying to solve.

                                1. 2

                                  Hi, I see it more as improving the way things work (aka progress). To answer your question:

                                  What Qworum is, in a gist

                                  Qworum intends to improve developer productivity by defining a module system for web applications. JavaScript already has NPM, which saves developers countless hours because devs can simply use the JS modules that others have made available. And that’s exactly what Qworum does, but for web applications.

                                  Example

                                  Imagine an e-commerce webapp that needs a shopping cart. The developers start implementing the cart until one of them says: “Hey, did you see that shopping cart Qworum service on https://my-shopping-cart.example? Let’s use that, and we don’t even need to copy any code over to our site; a remote call is enough!”

                                  Repeat this scenario for a contacts list, a todo list, etc., and see how much your productivity improves by using not only remote Qworum services but also local ones.

                                  REST-Qworum comparison

                                  Perhaps comparing Qworum with REST can help explain how Qworum works.

                                  A Qworum service is comparable to a REST API in that it has a set of end-points. A Qworum end-point in turn is similar to a REST end-point in that it has a URL and it receives (optional) data and returns some data.

                                  But the big difference compared to REST is that Qworum end-points are not obliged to return a result immediately. In other words, a Qworum end-point call can involve more than one HTTP(S) request-response pair. So Qworum can return 2 types of responses during a call:

                                  • HTML pages for interacting with the end-user,
                                  • Qworum scripts for making nested calls to other end-points (recursion is also supported), and for returning a result to end the call. Qworum scripts can be sent as XML to the browser, or they can be generated in web pages on the browser using JavaScript.
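
                                  A conceptual toy in Python, for intuition only (this is my sketch, not Qworum’s actual syntax or API): an end-point call is a little session that can yield pages to the user and make nested calls before finally returning a result.

                                  ```python
                                  def shopping_cart(items):
                                      # An "end-point" modeled as a generator: each yield is one round trip.
                                      answer = yield ("html", f"Cart: {items}. Check out? (y/n)")
                                      if answer == "y":
                                          total = yield ("call", sum(items))  # nested call, e.g. a tax service
                                          return ("paid", total)
                                      return ("cancelled", None)

                                  def run_call(gen):
                                      reply = None
                                      while True:
                                          try:
                                              kind, payload = gen.send(reply)
                                          except StopIteration as done:
                                              return done.value               # the end-point's final result
                                          if kind == "html":
                                              reply = "y"                     # pretend the user clicked "yes"
                                          elif kind == "call":
                                              reply = payload * 1.2           # pretend a nested end-point ran

                                  print(run_call(shopping_cart([10, 20])))    # ('paid', 36.0)
                                  ```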

                                  Is this response working ok for you?

                                  1. 2

                                    I still struggle to see the value proposition.

                                    Why can’t the server do all the intermediate requests server-side? What’s the benefit of doing it in the user’s browser?

                                    For example, Stripe provides UI libraries/components for making payments. They also provide an API. It also can talk to other APIs to do its thing (e.g. banks to make payouts, or IRS or whatever to manage taxes).

                                    So all of this is already possible and done. What benefits does Qworum provide?

                                1. 5

                                  I’m so tired of this bitching. Why should everybody get rich with software except the people writing it?

                                  1. 3

                                    There is truth in that. What is also true is that the authors chose the original license. They picked it and should have known what it allowed and forbade.

                                    People seem to still think, “oh, they will all be nice people and send me money, because I spent so much time on it.” Nobody does that. Nobody cares about software that works and that they got for free. It is not on people’s minds.

                                    1. 0

                                      Smells a little like victim blaming.

                                      Lightbend did choose a FOSS license. It didn’t work for them so they chose another license. They put forth a reasonable argument for it even though they didn’t have to.

                                      OP, however, took offence to that: “how dare you make me think about sending money to you?!”.

                                      You make it sound so innocent. It just is not on people’s minds. Well, maybe it should be?

                                      You’re not saying it, but it seems like you’re implying that picking a non-FOSS license would’ve been better. And that might be true. This is only the most recent example of a switch away from a FOSS license calling forth undue toxicity from some people. Maybe it is prudent to start with a non-FOSS license to better manage expectations?

                                      It seems like a Source Available license with generous free usage would signal “I believe it’s valuable, I just don’t want to deal with payments at this time” at the beginning of a project. Later, it has a clear path to a sustainable business around that value. And if it didn’t pan out, it could switch to a FOSS license to signal “AS IS. Don’t bother me”. It might be a solution, because the other way around apparently ruffles too many feathers.

                                      1. 2

                                        OP, however, took offence to that: “how dare you make me think about sending money to you?!”.

                                        I am not commenting on OP, just on the licenses.

                                        You make it sound so innocent. It just is not on people’s minds. Well, maybe it should be?

                                        It really is innocent, I think. Annoying, yet understandable. People are busy, especially at work. I think the majority here use tons of tools and libraries free of charge without ever even thinking about it. Have you ever given all the projects you rely upon the money they “deserve”? I sometimes donate to projects, but not really the amount of value I extract from their work.

                                        You’re not saying it, but it seems like you’re implying that picking a non-FOSS license would’ve been better.

                                        I am not really saying that. I once worked for a company that made an OSS framework. We had tons of successful users, but since it mostly just worked, nobody had it on their mind to give us a slice of the pie. With a different license that may have been possible, but that license would have hindered adoption in the first place. Permissive licences give you adoption/users, but those users do not convert to paying customers. It’s complicated.

                                        1. 2

                                          Lightbend did choose a FOSS license. It didn’t work for them so they chose another license.

                                          Yes, after years of community- and reputation-building on the basis of being an open source software provider. People trust the reputation of open source software because they know they can take the source code and pay someone else to maintain it for them if the original vendor doesn’t do it to their liking. It destroys the monopoly of vendors over software and fosters competition.

                                          Perhaps Akka should have chosen a strong copyleft license from the very beginning since Apache ended up so unsuited for them?

                                          1. 2

                                            People trust the reputation of open source software because they know they can take the source code and pay someone else to maintain it for them if the original vendor doesn’t do it to their liking. It destroys the monopoly of vendors over software and fosters competition.

                                            This is hypothetical. Forks do happen, but “pay” is rarely involved. What happens more often is that payments go to service providers who explicitly do not maintain the source code. This is the core of the issue in this case and in many similar ones.

                                            Perhaps Akka should have chosen a strong copyleft license from the very beginning since Apache ended up so unsuited for them?

                                            Hindsight is 20/20, eh? Things change. Perhaps earlier, Apache or whatever FOSS license they used was perfectly aligned with their goals. Now it isn’t.

                                            In the same vein we can say that perhaps users should have supported Lightbend financially more from the very beginning since they don’t like proprietary licenses so much?

                                            But they didn’t. And that’s OK. What is not OK is being cross about it now and being angry at Lightbend but not themselves. This is not the first time this has happened. The cause is clear. The problem is well known. There’s an obvious solution, too: pay the maintainers.

                                            The correct way to react to this is to admit you were wrong. “Well, it was great while it lasted. We had all the Freedom in the world and we chose to exploit the maintainers beyond the point they could bear. This is not the first time this has happened, so I should probably learn the lesson and start paying to keep the Freedom.” Unfortunately, it doesn’t seem to go that way. Instead we get “My Freedoms are getting violated again! I can’t exploit any more, so I will make a lot of fuss about it and make it look like I don’t use the software, in protest. Anyway, who’s next?”.

                                            1. 1

                                              What is not OK is being cross about it now and being angry at Lightbend but not themselves…The correct way to react to this is to admit you were wrong.

                                              By ‘you’ do you mean me, the guy who suggested using a strong copyleft license from the beginning, so that Lightbend wouldn’t have found themselves in the situation of not being able to extract value out of their work in the first place? To me a perfectly reasonable path would have been to start with the GNU GPL v3 and then later switch to the AGPL once they started offering value-added services. It’s the switch to a non-Free license that I take issue with.

                                              1. 1

                                                By “you” I meant OP, or whoever might feel angry they can’t use Akka now because they have to pay for it now.

                                                As for the AGPL, I don’t think it’s a solution. It doesn’t prevent cloud providers from hosting a product or charging for it. It only prevents them from having a competitive advantage in the form of private code changes. The best we can get out of it is those cloud providers contributing their changes back, and I don’t believe many of them actually have any.

                                                AGPL doesn’t force those cloud providers to pay for the software. So they still can provide their services without spending on the development. And the developers still incur all the expenses.

                                                At the same time, the narrative goes that the developers can still charge for their services of hosting the very same software they develop. Except they have to branch out into services, ops, reliability, support, and many other things that are not actual development.

                                                It’s been a long time since it became obvious that FOSS is not compatible with our reality. Be it the capitalist economy, human nature, moral failing, or whatever. Every time, it takes impressive mental gymnastics to show that it doesn’t exploit developers. People keep pointing at Linux and Red Hat to try to convince us that it’s possible to not starve while working on FOSS. At the same time, if OpenSSL vanished without a trace, more than one country would fall into economic collapse. No one seemed to notice that until just a few years ago it was maintained by, like, two dudes making $30k/y. Heartbleed changed the trajectory for OpenSSL but not for FOSS in general. A project has to be very popular or become “critical infrastructure” to attract any money in FOSS.

                                                1. 2

                                                  Look, it’s clear you don’t understand what Akka is (or you’re being deliberately obtuse to win an argument), otherwise you would never claim that cloud providers could ‘host’ Akka and charge for it. I think it’s time for you to call it a day.

                                    1. 4

                                      With a hat tip to @algernon’s sentiment, which closely aligns with my own first reaction…

                                      Have they (or others who have undertaken similar license changes) articulated why they prefer this route over releasing under the AGPL? I’ve started to prefer the AGPL for my web things and feel like it addresses most if not all of the concerns the BSL does, while being more comfortable (IMO) for potential contributors.

                                      1. 1

                                        I would figure a couple of things, primarily and secondarily etc.: risk of getting other unrelated tools accidentally GPL-ified like it was some kind of contagious sludge; a perceived inability to sell privately modifiable implementations to customers; communism; confusing language around patents in v3; etc.

                                        1. 1

                                          AGPL doesn’t solve the financial part.

                                          The AGPL requires release of modifications for hosted software, but it does not impose any restrictions on who’s getting paid. So the main issue - cloud providers hosting AGPL software, capturing the fees, and not paying the developers/maintainers - remains unresolved.

                                          BSL specifically addresses that part.

                                      1. 2

                                        I don’t know what Akka is. I don’t think it matters for the discussion of licensing.

                                        Overall this is a very bad take. It has no moral core, no legal basis, not even good drama.

                                        It restricts free usage to non-commercial purposes only. […] This violates Open Source’s rule 6 […] Or Free Software’s freedom zero

                                        No, it does not. It uses a different license. They’re not trying to present it as a FOSS license.

                                        Akka, like other products that pulled a bait-and-switch, is popular because it was marketed as being Open Source / Free Software

                                        I can see how some people can see it that way. We’ll get back to this shortly.

                                        as developers such as myself would never touch proprietary libraries with a ten-foot pole. Such libraries for me simply don’t exist

                                        It’s a great shame, dear OP, you feel that way. There’s a lot of great proprietary software out there. But this is probably irrelevant to the discussion.

                                        I need control of whatever runs in my program, I need the ability to fix it myself, or to have other people fix it for me, people that may not be affiliated with the maker of those tools;

                                        Lightbend actually address this:

                                        Can Akka community members continue to contribute to the project?

                                        Yes. This is a source available license that allows and encourages community involvement.

                                        See? This is not an issue in any practical sense. You have a bug fix? Nice! You can contribute it back and let Lightbend maintain your code.

                                        Software licenses are expensive, add up, and even in big companies that can afford it, going through the endless bureaucracy of having such expenses approved is freaking painful, which is why FOSS may be even more popular in corporations than it is in startups;

                                        I don’t want to assume, but it starts looking like OP is being obtuse on purpose here. Of course corporations would love free stuff they don’t have to deal with. Of course licenses can be expensive. Of course approving expenses can be painful. But you know what? A corporation with “US $25m per annum” revenue can probably afford both the bureaucracy and the expense for business-critical software.

                                        The new terms and pricing are among the more approachable in the industry.

                                        Selling support or extra tooling in FOSS sometimes works, because it’s complementary — employees can introduce a FOSS library or tool, without any kind of expense approval from upper management, and then the contract for extra stuff can come later, after it has proven its value.

                                        Well, this is exactly the proposition from Lightbend. You start completely free, then buy a license with support and extra stuff.

                                        I assume there was some sort of paid support available before, but for some reason there wasn’t much demand.

                                        It’s morally wrong to make the product popular, by advertising it as Open Source / Free Software, and then doing a reversal later. Don’t get me wrong, I am sympathetic to the issue that Open Source contributors aren’t getting paid. But in the Java community nobody wants to pay licenses for libraries. If that model ever worked, it was in other ecosystems, such as that of .NET, and that model has been dying there as well. Turns out, trying to monetize software libraries (or other FOSS products) is a losing proposition.

                                        Oh, here it goes. It’s a whole lot to unpack.

                                        Is it morally wrong to use someone’s work without paying them? Is it morally wrong to expect a proper level of support for a big community from a relatively small company without paying for it? Let’s not go there.

                                        Let’s instead focus on “Turns out, trying to monetize software libraries (or other FOSS products) is a losing proposition.” Why is that? How can one wave around Freedoms and appeal to morality, yet completely miss how riled up they are about someone wanting to collect the benefits of their own work? How can OP be angry about not being able to have this very good thing someone else built, for free, forever? If this thing is so good, why wouldn’t OP want to pay for it?

                                        I don’t have true answers, as I’m not the OP, but I will hazard a guess that it’s just a casual case of hypocrisy.

                                        Akka is a great project, and has been one of the major reasons for why people chose Scala, or even Java. We’ve been using it at work, with great success.

                                        OP completely agrees that Akka is good and valuable and everyone is happy it exists.

                                        However, going forward I can no longer recommend any kind of investment in it.

                                        But the moment the authors made any effort to capture the value they created, OP noped right out.

                                        I understand and have known the struggle that FOSS developers and companies go through. FOSS is just not a good business model.

                                        I just can’t understand how OP expects software to be built. Who’s going to pay for it? FOSS is already made way under market rate. Does OP expect authors to starve? Do authors have to work a day job to earn money and then spend extra hours on FOSS? What would OP suggest?

                                        Consider that before this license change I have recommended Akka, and I may have contributed to Akka in my free time, if I ever found the need for it. But after this change, I can no longer do so without getting paid 🤷‍♂️

                                        At this point it ventures into satirical Inception-land. OP never paid a cent to the authors, never even contributed to the project, yet is very upset that some big corporations might have to start paying for software that makes them money.

                                        I do want to thank Lightbend, from the bottom of my heart ❤️, for all they have contributed. I always loved their work.

                                        Just not enough to start paying for it.


                                        I said it before, I’ll say it again. FOSS did its thing. It’s time to move on to more sustainable models.

                                        It still has its place. A weekend project is perfectly fine to be released under MIT or GPL. But any sort of moderately (and up) popular project requires just so much effort that it can only be sustained by corporate sponsorship.

                                        And even then, it might be a corporation capturing free labor from the community rather than a proper community effort of classic FOSS.

                                        1. 3

                                          A weekend project is perfectly fine to be released under MIT or GPL. But any sort of moderately (and up) popular project requires just so much effort that it can only be sustained by corporate sponsorship.

                                          Agreed on the need for sponsorship - but, why is that incompatible with MIT or GPL?

                                          1. 1

                                            It is compatible but it is not a business model. It’s hard to forecast sales. It is much harder to forecast sponsorships.

                                            Sponsorships do not scale well, either. It’s quite straightforward to sponsor an individual or a few. But the assumption is that they’re doing everything a project needs: development, technical writing, support, community management, infrastructure, ops, security. In principle it’s possible to sponsor a set of individuals who cover the broad spectrum of a project’s needs, but I have not seen it in practice often (ever?).

                                            Sponsorships are hard from the incentives point of view, too. Where do sponsored people’s loyalties lie? With the community? With their sponsor? What happens when there’s a conflict of interest (e.g. tension between the community’s desires and the sponsor’s requests)?

                                          2. 2

                                            I don’t have much to say about your whole post, but “FOSS did its thing. It’s time to move on to more sustainable models” is a strange thing to say. The GPL has been sustained for over thirty years. Only profiteers are looking to move away from it to licenses that restrict the user. If you think software is primarily about profit, then maybe the GPL isn’t sustainable (though companies like WordPress and OpenSIPS make it work). If you think software is about fun, or tools to make our lives easier or better, or even just fairness - there is always incentive to release your software as FOSS.

                                            1. 1

                                              I concede my wording was unclear.

                                              Only profiteers are looking to move away from it to licenses that restrict the user.

                                              I don’t believe that’s true. I also don’t think you’re convinced it’s true either. There are many projects out there with very small teams and disproportionately big user bases. Those user bases demand support: not only do they have problems with the projects that they want resolved, but even things like contributions require the team’s attention. The asymmetry completely clogs the team’s bandwidth. A solution to that is to dedicate more time to the project. That time comes either from the team’s “free” time - carving time out of their sleep, their time with family or friends, their recreation, etc. - or from their work time, cutting into their profits. Either situation is unsustainable. An obvious solution is to pay the team so that they can spend their work time on the project. But this happens very rarely in FOSS, for whatever reason.

                                              If you think software is about fun, or tools to make our lives easier or better, or even just fairness - there is always incentive to release your software as FOSS.

                                              Well, I wholeheartedly agree. The thing is, supporting a FOSS project is less fun than it seems. Writing docs is less fun than writing code. Dealing with inconsiderate users is at times very much not fun. An accidental breaking change in a new release resulting in a torrent of complaints is not what brightens an average maintainer’s mood.

                                              You see, everyone is happy to take the “free” part of FOSS, but most also act as if the “AS IS” part is not there. Fun, altruism, and ideology (fairness being in this category) are all good, but they are very small parts of project maintenance. What’s worse, these parts, being internal, at best remain constant and usually diminish with time. The other parts, though, more or less scale with the user base.

                                              This is the issue. At the point where fun stops being enough, people start looking for a solution. Some take no action and burn out. Some distance themselves from the project. Others look for a way to make it sustainable. For example, Lightbend were providing paid support. It turns out their product was so good that the profit was not at sustenance level.

                                              We (users) are burning out brilliant people who come up with good ideas and are willing to spend their time implementing them without hesitation, while we become all indignant at any hint of supporting them. But no one will step in to do their work.

                                              There are calls to fork Akka, for example. Maybe there will be a few forks. Though I’m 99% confident not a single one of them will last longer than 6 months.

                                              FOSS has its role, but it also has a glaring problem. People know about it. And yet people rally against any effort to fix it. They don’t just point out issues in proposed solutions or suggest alternative solutions. They say “you can’t fix it, do not even try”.

                                          1. 1

                                              I use a different tool, also written in Go. It suffers from bad memory usage and the occasional OOM. A quick glance at restic’s issues reveals that it suffers from the same problem.

                                            Do you have any recommendations for a backup tool that supports deduplication and provides solid encryption? Preferably with low disk space requirements for index on the source machine.

                                            1. 4

                                              Borg/borgmatic (cw: github) satisfies those requirements for me.

                                              1. 2

                                                Not sure if this is the other tool you were referring to, but I’ve been a Kopia user for the better part of a year and it’s worked great for me so far!

                                                https://github.com/kopia/kopia

                                                Supports deduplication and encryption out of the box, along with sinks to popular cloud providers (I use Backblaze B2).

                                                1. 1

                                                    Yes, I use Kopia. As I said, it does OOM for me once in a while. It also uses 10GB of cache while backing up about 50GB of files. Seems a bit excessive.

                                                  1. 2

                                                    I’ve personally never had an OOM with a ~600GB backup working set.

                                                    That said, I have 64GB RAM on both my desktop and laptop (each of which is backed up using Kopia).

                                              1. 10
                                                1. I’m sorry you’re using Ubuntu :p
                                                    • There’s a slight chance a reader (of the title) might assume there’s something wrong with SSH signing, but OP actually can’t use it for a reason unrelated to git or SSH signing itself.
                                                  2. There’s another way to switch back to the default: delete the config value. Use --unset <key> for that, e.g. git config --global --unset gpg.format.
                                                1. 3

                                                    I think the problem is using an Ubuntu LTS that will soon be deprecated (04.2023, only 8 months left) on a desktop instead of upgrading to one of the newer versions.

                                                  1. 2

                                                      I had a similar issue when switching to SSH signing. My primary WSL2 instance is Ubuntu 20.04; git is “stuck” on an old version that doesn’t support SSH signing. So now I’m in the process of copying my stuff over to a new Ubuntu 22.04 WSL instance :))

                                                  1. 9

                                                    I had been drooling over this type of device for years, with the dream of being able to do quick work anywhere, be it on the train, the couch, etc. Always have a tiny laptop handy that can do everything a normal laptop can.

                                                    Then I got a Pinephone with the keyboard case, which promised to be exactly that. But I did not use it for long. Aside from the fact that it was massively underpowered, the form factor was not good either. It is simply not good for your posture to use this for an extended period of time. I got a stiff neck and shoulders pretty quickly, accompanied by RSI-like symptoms in my hands and fingers.

                                                    In hindsight this is obvious, of course. That’s what you get with such a small keyboard and screen. But if I am only going to use it for simple commands and notes then… I might as well just use the Android phone with Termux that I already carry around. After that experience, tiny laptops completely lost their appeal for me.

                                                    1. 1

                                                      Being underpowered is the main concern for me. Otherwise, I don’t even consider this a contender for a main work device. I see it as a crossover between a novelty toy and an emergency response device. For that it needs to be small, light, and not freeze when there’s an SSH session open alongside a few tabs in a browser.

                                                      I was tempted by a few devices before, but reviews always showed that they were slow.

                                                      I’m not holding my breath that this will tick both boxes. But it might tick the novelty box really hard, so I’m still kinda excited.

                                                      1. 1

                                                        But I did not use it for long. Aside from the fact that it was massively underpowered, the form factor was not good either. It is simply not good for your posture to use this for an extended period of time.

                                                        You might like this fellow crustacean’s setup: https://www.reddit.com/r/ErgoMobileComputers/comments/s6kgv2/split_keyboard_raised_iphone_writing_setup/

                                                      1. 20

                                                        TLS probably has a lot of impact on this. First, the TLS handshake requires at least two exchanges that need to be ACKed before the actual HTTP response is sent. In that time TCP might figure out the bandwidth. Or, if not, the handshake has spent most of those initial 10 packets, leaving almost nothing for HTTP.
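
                                                        Back-of-the-envelope in Python (an initcwnd of 10 and an MSS of 1460 are common defaults; the handshake overhead here is a made-up example):

                                                        ```python
                                                        initcwnd_budget = 10 * 1460   # ~14.6 KB deliverable in the first round trip
                                                        tls_overhead = 4 * 1024       # hypothetical cert chain + handshake records
                                                        print(initcwnd_budget - tls_overhead)  # ~10.5 KB left for the HTTP response
                                                        ```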

                                                        1. 7

                                                          It does seem really weird that this is only barely mentioned in the article when it’s overwhelmingly likely to dominate any considerations having to do with slow-start.

                                                          1. 6

                                                            Exactly this. Terminating TLS has a real cost and eats up the budget that you’re talking about. I can’t stand this sort of posturing bull that ignores reality. The original author should go funroll some loops.

                                                          1. 8

                                                            Computationally homogeneous. A typical neural network is, to the first order, made up of a sandwich of only two operations: matrix multiplication and thresholding at zero (ReLU). Compare that with the instruction set of classical software, which is significantly more heterogenous and complex. Because you only have to provide Software 1.0 implementation for a small number of the core computational primitives (e.g. matrix multiply), it is much easier to make various correctness/performance guarantees.

                                                            Well, actually, classical software can be made up of only one instruction (NAND), so it’s twice as good as neural networks.

                                                            The 2.0 stack also has some of its own disadvantages. At the end of the optimization we’re left with large networks that work well, but it’s very hard to tell how. Across many applications areas, we’ll be left with a choice of using a 90% accurate model we understand, or 99% accurate model we don’t.

                                                            The 2.0 stack can fail in unintuitive and embarrassing ways, or worse, they can “silently fail”, e.g., by silently adopting biases in their training data, which are very difficult to properly analyze and examine when their sizes are easily in the millions in most cases.

                                                            This seems like the crux of it, though? If we don’t understand how it works and it can fail in unintuitive and embarrassing ways, how can we actually trust it?

                                                            1. 3

                                                              This seems like the crux of it, though? If we don’t understand how it works and it can fail in unintuitive and embarrassing ways, how can we actually trust it?

                                                              ML is generally good for problems where either:

                                                              • You don’t actually understand the problem,
                                                              • There might not be a correct answer, but a mostly-correct answer is useful, or
                                                              • The problem changes frequently.

                                                              Shape detection is a good example of the first. Philosophers from Plato onwards have tried to define a set of rules that let you look at an object and say ‘this is a chair’. If you could define such a set of rules, then you could probably build a rule-based system that’s better than example-based systems, but in the absence of such rules the example-based approach is doing pretty well.

                                                              The middle category covers a lot of optimisation problems. Even where there is a correct (optimal) answer for these, the search space is so large that finding it is not even remotely feasible. Example-based solutions over a large set of examples let you half-arse this and get something that is a lot better than nothing and a lot less computationally expensive than an optimal solution.

                                                              The last category is particularly interesting. A lot of fraud detection systems are like this: they’re spotting patterns and the attacker adapts to them pretty well. Spam filtering has been primarily driven by ML for a good 20 years (I think the first Bayesian spam filters might have been late ‘90s, definitely no later than 2002) because it’s trivial for a spammer to change their messages if you write a set of rules and much harder for you to change the rules. These things are not flawless for security because they’re always trailing indicators (the attacker adapts, then your defence adapts) but they’re great as a first line of defence. Project Silica at MSR one floor down from me used ML for their voxel recognition for data etched into glass to massively speed up their development flow: they could try new patterns as fast as they could recalibrate the optics and then retrain the same classifier and see how accurate it could be. A rule-based system might have been a bit more accurate, but would have required weeks of software engineering work per experiment.
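
                                                              To make the spam example concrete, here is a toy Bayesian scorer (a sketch, nowhere near a production filter): “retraining” is just recounting words over new examples, which is why it adapts so much more cheaply than a hand-written rule set.

                                                              ```python
                                                              # Toy naive-Bayes spam scorer with add-one smoothing; priors
                                                              # assumed equal for simplicity, training data is made up.
                                                              from collections import Counter

                                                              spam = ["buy cheap pills now", "cheap pills cheap"]
                                                              ham = ["meeting notes attached", "lunch at noon"]

                                                              def counts(msgs):
                                                                  c = Counter()
                                                                  for m in msgs:
                                                                      c.update(m.split())
                                                                  return c

                                                              spam_c, ham_c = counts(spam), counts(ham)

                                                              def spam_score(msg):
                                                                  # Product of per-word likelihood ratios; > 1 means spammy.
                                                                  score = 1.0
                                                                  for w in msg.split():
                                                                      p_spam = (spam_c[w] + 1) / (sum(spam_c.values()) + 2)
                                                                      p_ham = (ham_c[w] + 1) / (sum(ham_c.values()) + 2)
                                                                      score *= p_spam / p_ham
                                                                  return score

                                                              print(spam_score("cheap pills"))      # >> 1: looks spammy
                                                              print(spam_score("meeting at noon"))  # << 1: looks like ham
                                                              ```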

                                                              Things like Dall-E fit into all three categories:

                                                              • Generating a set of rules for how to create art is a problem that various artistic movements over the centuries have tried and failed to do.
                                                              • If you really want an image with a particular characteristic, you probably need to hire an artist and have multiple rounds of iterations with them, but an image that’s more-or-less what you asked for and is cheap to generate is vastly cheaper than this and much better than no image.
                                                              • The prompt changes every time, requiring completely different output. Artistic styles change frequently and styles for commercial art change very rapidly. Retraining Dall-E on a new style is much cheaper than writing a new rule-based generator for that style.

I see ML as this decade’s equivalent of object orientation in the 1980s/1990s and FP in the last decade or so:

                                                              • Advocates promise that it can completely change the world and make everything better.
                                                              • Lots of people buy the hype and build stuff using it.
                                                              • A decade or so later, it’s one of the tools in a developer’s toolbox and people accept that it’s really useful in some problem domains and causes a lot of problems if applied in the wrong problem domain.
                                                              1. 2

                                                                As far as I can tell, software that worked 99% of the time would generally be an improvement.

                                                                1. 3

                                                                  As far as I can tell, software that worked 99% of the time would generally be an improvement.

That’s obvious nonsense. Imagine routers only routed 99% of traffic correctly: after just 4 hops you’d be losing almost 4% of packets, and TCP would break down. We are very close to 100% everywhere it matters enough for people to care.
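A quick back-of-the-envelope check of that claim, assuming each hop independently delivers 99% of packets:

```python
# End-to-end delivery probability over n independent 99%-reliable hops.
per_hop = 0.99
for hops in (1, 2, 4, 10):
    print(hops, round(per_hop ** hops, 3))
# 4 hops -> ~0.961, i.e. ~4% end-to-end loss. TCP throughput falls off
# roughly as 1/sqrt(loss rate), so even a few percent of random loss
# cripples a connection.
```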

You will get at most 95% on typical ML tasks that people actually care about.

ML models also tend to suck at failing. A typical router will just reject unintelligible packets, while an ML model will do something totally arbitrary, such as classifying furniture as animals.

What I mean is: please don’t use ML for ABS, and always make it assist real people; never let it run unattended.

                                                                  1. 2

                                                                    It’s also worth bearing in mind that 1% in terms of a particular sample doesn’t mean 1% in the real world. There are probably no bug-free routers today, but if you buy ten good routers, it’ll take years to get all ten of them to incorrectly route packets due to implementation bugs, and it’ll involve quite a lot of reverse engineering and testing. Meanwhile, you can get most 2.0 software (!?) to fail in all sorts of funny ways with a few hours of trial and error, and I guarantee that each of them is backed by a slide deck that claimed 99.99% accuracy on ten different data sets in an internal meeting.

Bugs in the implementation of a model tend to cluster in poorly-understood or unexamined areas of the model, and you can usually tell which ones those are without even running the software, just by reading the code and the source repository history, and doing a literature survey if it’s a well-documented problem (like routing). Figuring that out for statistical models usually devolves into an exercise in stoner philosophy at the moment.

                                                                  2. 1

                                                                    As a general statement that is probably true.

However, we can prove (or disprove) 100% correctness of traditional software. We don’t do it for all software because it’s hard, but we know how to do it in principle. At the same time, interpretability is an open problem in ML. We can reverse engineer (more like guess) the algorithm encoded in some of the simplest models, but even that is far from perfect: the recovered algorithm is an approximation most of the time, not exact, and it can differ subtly from the classical algorithm we infer. We can’t do it at all for big models like GPT-3, and we can’t do it reliably even for all simple models. So it might look like the model works 99% of the time, but you can’t rigorously prove that it does, or that it’s actually 99%.

                                                                    1. 2

I think you can only (trivially) disprove it by producing a counter-example on which it fails. The example that springs to mind is facial recognition trained mostly on white faces failing on black faces.

There might be systematic ways to construct such examples, like the “adversarial” inputs crafted to make image recognition fail.
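There is in fact a standard recipe for that: the fast gradient sign method from the adversarial-examples literature. A minimal PyTorch sketch, assuming a differentiable classifier `model` and inputs normalised to [0, 1] (not any particular production system):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # Nudge every input value by +/-eps in the direction that increases
    # the loss; the perturbed image often looks unchanged to a human but
    # flips the model's prediction.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```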

                                                                      1. 1

                                                                        The adversarial model problem strikes me as a hard one to ever solve because any attacker with access to the original model can just use it to train an adversary.

                                                                        1. 1

Then I’d rather say that instead of being 99% correct, it’s actually 100% incorrect, because it can never be fixed. At least if traditional software has a bug, you can fix it.

                                                                          1. 2

                                                                            Well, you just train the model against the adversary, obviously. :-)

                                                                  3. 1

                                                                    This seems like the crux of it, though? If we don’t understand how it works and it can fail in unintuitive and embarrassing ways, how can we actually trust it?

IMO, the crux of it is that “software 2.0” is good at solving a class of problems that are commercially relevant and that “software 1.0” is not so good at: typically domains where we’ve needed expensive humans to do things, and in which human practitioners have developed a great deal of tacit knowledge that is hard to make explicit. It really is incredible that we now have a generalizable approach for automating things that used to require practitioners with a great deal of experience.

                                                                    But in domains where explicit knowledge is more important, I’d think “software 1.0” will dominate. Though, if AGI ever becomes practical / powerful enough, I don’t discount the idea of “software 2.0” AGI programmers developing (in partnership with humans, at least at first) “software 1.0” systems.

                                                                    Anyway, to respond to your actual point, “how can we actually trust it?”:

1. We won’t necessarily have a choice. Economics being what they are, and human drive being what it is, a technique that is more effective more easily will win in its niche, whether or not that’s considered good for us. I could probably mock up a prisoner’s dilemma scenario to illustrate this better, but I’m already writing too much.
2. At some point of examination, trust will break down in any system. We probably all know about https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf. In math, ZFC set theory is a major foundation of many results, but then there’s this: https://scottaaronson.blog/?p=2725. IMO, the reasonable approach to trusting “software 2.0” systems is similar to the way we establish faith in the sciences: through hypothesis generation and statistical testing, as in the sketch below.
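For a minimal version of that statistical testing (with made-up numbers): treat a vendor’s “99% accurate” claim as a null hypothesis and run a one-sided binomial test against a fresh sample:

```python
from math import comb

def p_value(successes, trials, claimed=0.99):
    # Probability of seeing this few successes (or fewer) in `trials`
    # attempts if the claimed accuracy were true; a tiny p-value means
    # the sample contradicts the claim.
    return sum(comb(trials, k) * claimed**k * (1 - claimed)**(trials - k)
               for k in range(successes + 1))

print(p_value(975, 1000))  # ~5e-5: 975/1000 correct refutes a 99% claim
```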
                                                                    1. 1

                                                                      In the mean time, I believe we’ll soon experience a new AI winter. All it takes is one spectacular failure, or a huge money sink not paying off, like, let’s say, self-driving cars.

                                                                  1. 13

                                                                    I’m wary of this argument because it proves too much – for any given US law that bans something, the state of IOT/programmable-everything/etc. is such that it’s probably possible to write a computer program to automate doing the banned thing, and then attempt to hide behind “code is speech” arguments. And I’m fairly certain the EFF doesn’t intend to argue that effectively all US law is invalid under the First Amendment. But that’s unfortunately where the argument leads when taken to its logical conclusion.

                                                                    Plus, Tornado Cash and other cryptocurrency “mixers” have facilitating money laundering as one of their primary acknowledged-by-everyone use cases. It should be surprising to nobody that the US government eventually cracked down, and attempting to continue developing and improving the money-laundering tool under the retroactive justification of a research project doesn’t strike me as particularly principled or particularly likely to succeed.

                                                                    Or, more simply: I’m sure that to some people it would be a fascinating research project to figure out how to build better money laundering systems. But calling it a research project wouldn’t make it principled or legal to do so, and shouldn’t be a defense to enforcement of the law against the “project”.

                                                                    1. 8

                                                                      IANAL

                                                                      Publishing code to automate something is not the same as doing that something, is it?

                                                                      Is publishing a book on how to burn babies protected by the First Amendment? Is it the same as actually burning babies? I’m not asking in a moral sense. Only from the legal PoV. I’m also pretty sure burning babies is already a criminal offence.

Likewise, people might state that Tornado’s primary use is money laundering, but is publishing the code the same as laundering money?

                                                                      1. 4

                                                                        IANAL either

As far as I know, such things fall under the heading of “what is its primary purpose?”. If you are publishing code clearly designed to help people break the law, you are aiding and abetting those people, and a judge would not look kindly upon you or your “free speech” defense. If you are publishing code that could be used to break the law but is also commonly used for other things, it’s fine.

                                                                        And “free speech” doesn’t cover everything, even in the USA - you can be sued for libel and slander, for example.

                                                                        1. 4

                                                                          ditto IANAL

In the US, structuring goes out to a second order: “structure or assist in structuring, or attempt to structure or assist in structuring.” It seems to me that a machine purpose-built to facilitate money laundering (which is definitionally what’s happening with Tornado, whether the input is illicit or not) is pretty well captured there.

You need a very odd view of the world to think that putting both dirty and clean money into a box and shaking it makes the money come out clean just because you directed a computer to do the shaking.

                                                                          1. 1

                                                                            Your argument is reasonable but some edge cases are still not clear.

If I find a software vulnerability and publish a proof-of-concept exploit, is that illegal because the primary purpose of the code is aiding violations of the Computer Fraud and Abuse Act? The most obvious defense is “the code wasn’t intended to be used, just to show how it could be done”, which I am morally OK with but struggle to find the tangible difference in when applied to a PoC vs the baby burnomatic.

                                                                            1. 1

                                                                              A proof of concept tends to be just that, not a weaponised point-and-click exploit that can take over a remote machine. The latter would definitely be closer to the “baby burnomatic”. This is also why traditionally, PoCs that are actually harmful to run would often contain deliberate “mistakes”, to make them not readily usable. But it’s a grey area, for sure.

                                                                        2. 5

                                                                          Plus, Tornado Cash and other cryptocurrency “mixers” have facilitating money laundering as one of their primary acknowledged-by-everyone use cases.

I disagree. Everything is public on Ethereum: if I send you some Ether, anyone can look at the sending address and see all of its activity. There are plenty of reasons to want privacy, and this is one of the best tools for privacy on Ethereum. Privacy is the primary use-case here.
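That transparency is easy to demonstrate: with web3.py and any public JSON-RPC node, anyone can inspect any address. (The endpoint URL below is a placeholder, and the zero address is used purely as an example.)

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # placeholder node

addr = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")
print(w3.eth.get_balance(addr))            # current balance in wei
print(w3.eth.get_transaction_count(addr))  # lifetime count of outgoing txs
```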

                                                                          1. 5

So why would you use a blockchain to send money anonymously if the blockchain is not anonymous? If you use a normal bank transfer or services like PayPal, your activity will be hidden from the general population.

                                                                            1. 3

a normal bank transfer or services like PayPal

I prefer open-source and censorship-resistant methods of sending assets. Also, being able to automate things by writing smart contracts is nice (the ecosystem is still immature, though).

                                                                              1. 2

                                                                                Do you consider fighting money laundering a form of censorship that you want to avoid?

                                                                                1. 2

This is really about the right to be able to perform a private transaction. The majority of transactions through TC were not from malicious actors. Furthermore, the way the sanction works, anyone who receives money from TC is criminally liable, which is pretty crazy because anyone can send you funds from TC.

                                                                                  1. 2

This is really about the right to be able to perform a private transaction.

What right are you referring to? I think that you have a right to conceal your financial activity from other citizens, but I don’t think you have a right to conceal it from the state in all possible situations. It’s my understanding that in the USA, financial entities are required by law to do various forms of reporting (see for example https://en.wikipedia.org/wiki/Bank_Secrecy_Act). I doubt that the TC entity fulfilled any of those requirements.

The majority of transactions through TC were not from malicious actors.

Yes, but at the same time ~15% of transaction volume is suspected (known?) to have been from organized crime. I don’t really see how else this could have ended if TC by its very definition is about avoiding required reporting.

                                                                                    1. 2

                                                                                      What right are you referring to?

I simply mean it in the colloquial sense.

                                                                                      I don’t really see how else this could have ended if TC by its very definition is about avoiding required reporting.

TC allowed you to generate an audit that proved which address the assets came from. If an auditor asked you to prove the source of your funds, you could selectively reveal the source to that person or entity. You can still do required reporting with TC without totally losing anonymity.

                                                                                      1. 1

Please read about the laws mentioned above. They are not about you voluntarily generating some reports. The reports must be generated without your knowledge:

                                                                                        There are also penalties for banks who disclose to its client that it has filed a SAR about the client.

                                                                                        and should contain detailed information on both ends of transactions:

                                                                                        CTRs include an individual’s bank account number, name, address, and social security number.

None of this is something TC can do. Mixers are by definition designed to be illegal to operate in most jurisdictions.

                                                                                        1. 2

                                                                                          Mixers are by definition designed to be illegal to operate in most jurisdictions.

                                                                                          “If a law is unjust, a man is not only right to disobey it, he is obligated to do so.”

                                                                        1. 5

                                                                          Nice bit of investigation but I can’t help but feel like the author is too eagerly laying sticks of TNT under every post of Chesterton’s fence to save 7 seconds maybe once a week.

                                                                          1. 3

There’s some magical threshold where things are trivially, slightly broken where they really shouldn’t be… so the issue is really insignificant yet extremely annoying. I’ve definitely done things like that in the past - worked around a 2s delay where it really shouldn’t happen, because it annoyed me more than a 2min “actually broken” process would.

Just today I was about to dive into debugging why tf the wordle website freezes for ~4s every few days, because it’s extremely infuriating. People’s behaviours are weird.

                                                                            1. 5

I absolutely get the frustration. Everyone has a point where they stand up to death by a thousand papercuts. I’m just objecting to casually suggesting disabling SIP without so much as linking to what it is and what it does. From the OP it looks a bit like its only function is to be annoying and make git slow twice a month.

                                                                            2. 3

It reads to me like a recreational investigation into the shitshow of complexity that modern computer systems and OSes are.

                                                                            1. 3

                                                                              While this article is specifically regarding the 0.100 release of SecureStore for rust, SecureStore itself is a language-agnostic open protocol for a plain-text, git-versioned alternative to storing secrets as environment variables (which is inherently insecure and prone to leakage) or via network-accessible secret management servers (which are a really heavy dependency, incur a fairly significant devops cost, and are overkill for most companies’ needs). SecureStore implementations are available for other languages, and the protocol was designed from the ground up to be portable, git-friendly, and easy to use.

                                                                              SecureStore vaults unify the way passwords are stored and retrieved in-development and in-production and make spinning up development environments really easy (since the secrets are actually cloned alongside the code when you just check out the repo, you just need to know the password or be given a single decryption key). The git versioning makes sure that your secrets are committed at the same time as the code that uses them, and the usage of separate vaults for production and dev use means you can separate who has access to what. Even if your secrets change daily, the master decryption key is generally static (unless you rotate it) and so you only need to deploy the production keyfile once to newly imaged servers via CI or out-of-band.
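Not affiliated with the project, but to illustrate the general pattern being described - secrets encrypted with a symmetric key and committed next to the code - here is a conceptual Python sketch using Fernet from the cryptography package. This is not SecureStore’s actual file format or key derivation:

```python
from cryptography.fernet import Fernet
import json, pathlib

key = Fernet.generate_key()  # the one secret kept OUT of the repo (keyfile/CI)
f = Fernet(key)

# Encrypt each secret individually; the resulting file is safe to commit,
# diff, and version alongside the code that uses it.
vault = {"db_password": f.encrypt(b"hunter2").decode()}
pathlib.Path("secrets.json").write_text(json.dumps(vault, indent=2))

# Anyone (or any CI job) holding the key can decrypt after checkout:
stored = json.loads(pathlib.Path("secrets.json").read_text())
print(f.decrypt(stored["db_password"].encode()))  # b'hunter2'
```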

                                                                              1. 1

                                                                                I think I can guess but maybe you can confirm. Does it support multiple passwords/keys to decrypt the secrets? E.g. each developer has their own password they don’t share with anyone.

                                                                                1. 2

Not with one vault, no. Typically you’d have a dev vault with a password shared with everyone who has repo write access, and a staging/prod vault for only the devs trusted with remote access to prod servers, protected with its own password (or no password and keyfile only, for even greater security).

                                                                              1. 19

Started like a solid exploration of the device, nice technical writeup. Went into an OK business introspection. And the last 1/3 is a whole lot of drama. 5/7 with rice.

                                                                                1. 5

It seems GNOME is building for a wide audience of “normies” while their actual users are “geeks”. Their heart is in the right place in wanting an accessible and nice-looking UI, but they completely miss what their users want: the freedom to tinker and break their stuff, at the expense of accessibility and nice UI.

GNOME should stop fighting their users and stop breaking stuff out of spite. Any support request for a broken theme should be redirected to the distro that shipped it. Yes, it’s a big burden and might look like finger pointing at times, but that’s the cost of FOSS. As OP rightly mentioned, no one has infinite support capacity, and most GNOME users understand that.

                                                                                  1. 18

I’m a GNOME user, very much a geek, and I love the direction they’re taking. I don’t want to mess with my UI, I want it to get out of my way and let me use the computer. GNOME does that spectacularly well, much better than any other DE I have tried over the years. I love that I don’t have to tinker with it, because that lets me focus on what I want to do, rather than having to fight my DE. I do not enjoy tinkering with my desktop; it is not my area of interest. If it were, I’d use something else - that’s the beauty of having a diverse set of options. That GNOME focuses on providing an accessible, consistent experience out of the box with only a few knobs to tweak is great. It’s perfect for those of us - geek or non-geek alike, and anything in between - who just want to get shit done, and honestly don’t care about tweaking it to the last detail.

                                                                                    GNOME stays out of my way, doesn’t overwhelm me with tweaks and knobs I couldn’t care less about. It’s perfect. It’s perfect for me, a geek who keeps tweaking stuff that matters to him (like, my keyboard firmware is still not quite where I want it to be after half a decade of tweaking it). I love tinkering with things where tinkering makes sense. Tinkering with my firmware makes me more productive, and/or the experience more ergonomic, easier on my hands and fingers. Tinkering with my editor helps me get things done faster.

                                                                                    My DE? My DE stays out of my way, why would I want to tinker with that?

As for theming, I’d much prefer a single theme in light & dark variants where both of them are carefully designed, than a hodge-podge of half-broken distro-branded “stuff”. The whole “let’s make the distro look different” idea is silly, if you ask me. A custom splash screen, or background, or something unobtrusive like that? Sure. But aggressively theming so it’s distro-branded? Nope, no thanks. I’d much prefer it if it didn’t matter whether I’m using RedHat, Ubuntu, or whatever else, and my GNOME looked the same. That’s consistent. I don’t care about the brands, it’s not useful.

                                                                                    So, dear GNOME, please keep on doing what you’re doing. People who don’t like the direction, have alternatives, if they like to tinker so much, they can switch away too. Those of us who want something that Just Works, and is well designed out of the box, we’ll stay with GNOME.

                                                                                    1. 6

                                                                                      I think the problem is, you’re not getting a desktop you don’t have to fight, you’re just getting a desktop that you can’t fight.

                                                                                      1. 12

                                                                                        I am getting a desktop I don’t have to fight, thank you. I don’t want to fight it, either. If I wanted to, there are many other options. I prefer not to, and GNOME does what I need it to do. For me, that’s what matters.

                                                                                        It doesn’t work for everybody, and that’s fine, there are other options, they can use something that fits their needs better. But do let GNOME fit ours.

                                                                                        1. 4

I mean, I guess I just don’t see why removing options would give you a desktop that you don’t want to fight. You don’t have to fight KDE either. The only difference, aside from default preferences, is that you can fight KDE if you want to.

                                                                                          If Gnome can be a desktop you don’t have to fight without customisability, it can be a desktop you don’t have to fight with customisability just as easily.

                                                                                          1. 5

                                                                                            You misunderstood. I don’t care about customizability of my desktop. I want it to stay out of my way, and provide a nice, cohesive design out of the box. Simple as that. If the developers believe the best way to achieve that is libadwaita, I’m fine with that. I don’t want to tinker with my DE. If I have to, I’ll find one where I don’t.

                                                                                            Besides, libadwaita can be customised. Perhaps not themed, as in, completely change it, but it does provide the ability to customise it. Pretty much how macOS Carbon does customisation. Personally, I find libadwaita’s customisation a lot more approachable than GTK3’s theming. It’s simpler, easier to use.

                                                                                            1. 4

I think people misunderstand - it’s not just “fewer options are simpler for the user”; it’s also simpler for the people maintaining the application, as the application has fewer permutations of configuration to test and debug.

                                                                                        2. 4

                                                                                          And what happens if I’m using KDE and need to use a single GNOME app?

You install one GNOME app, which, so far, was automatically themed with Breeze and looked at least somewhat like a native app, and used native file pickers. Now with the recent GNOME changes, just installing a single GNOME app forces you to look at their theme, and forces you to use their broken filepicker.

Apps should try to be native to whichever desktop they’re running in; they shouldn’t forcefully bring their own desktop into whatever environment they’re in.

                                                                                          GIMP isn’t using adwaita on Windows either, and neither should Bottles bring adwaita into my KDE desktop.

                                                                                          1. 12

                                                                                            And what happens if I’m using KDE and need to use a single GNOME app, and now I’m forced to look at their hideous and unusable adwaita theme?

                                                                                            Then you go and write - or fund - a KDE alternative if you hate the GNOME look so much, and there’s no KDE alternative.

                                                                                            GNOME is like a virus, it infests your desktop more and more.

                                                                                            Every single toolkit is like that.

Qt isn’t any different. macOS’s widget set isn’t any different. Windows’ isn’t any different. They all look best in their native environments, and they’re quite horrible in others. The macOS and Windows widget sets aren’t even portable. Qt is, but even when it tries to look native, it fails miserably, and we’d be better off if it didn’t even try. It might look out of place then, but it would at least be usable. Even if it tries to look like GNOME, it doesn’t, and just makes things worse, because it looks neither GNOME-native, nor KDE/Qt-native, but a weird mix of both. Yikes.

                                                                                            GNOME is doing the right thing here. Seeing apps of a non-native widget set try to look native is horrible, having to fight to make them use their native looks rather than try - and fail - to emulate another is annoying, to say the least. I’d much prefer if QT apps looked like QT apps, whether under KDE or GNOME, or anywhere else.

                                                                                            The only way to have a consistent look & feel is to use the same widget set, because emulating another will always, without exception, fail.

Now with the recent GNOME changes, just installing a single GNOME app forces you to look at their theme, and forces you to use their broken filepicker.

                                                                                            Opinions. I see no problem with the GNOME file picker. If you dislike it so much, don’t install GNOME apps, help write or fund alternatives for your DE of choice.

Apps should try to be native to whichever desktop they’re running in; they shouldn’t forcefully bring their own desktop into whatever environment they’re in.

                                                                                            No, they should not. Apps should be native to whichever desktop they were designed for. It is unreasonable to expect app developers to support the myriad of different desktops and themes (because we’d have to include themes then, too).

KDE/Qt apps bring their own desktop to an otherwise GNOME/GTK one. Even if they try to mimic GNOME, the result is bad at best, and we’d be better off if they didn’t try. GNOME is doing the right thing by not trying to mimic something it isn’t and then failing. It stays what it is, and so should Qt apps, and we’d be free of the broken stuff that stems from apps trying to pretend they’re something they really are not.

                                                                                            GIMP isn’t using adwaita on Windows either

                                                                                            Last I checked, GIMP isn’t even using GTK4 yet to begin with, so it doesn’t use libadwaita anywhere. They didn’t make a windows-exception, they just didn’t port GIMP to GTK4 yet. Heck, the stable version of it isn’t even GTK3, let alone 4.

                                                                                            1. 3

                                                                                              help write or fund alternatives for your DE of choice.

Considering that funding for open source projects is limited, this means I’ll have to try to get Gnome users to stop donating to Gnome and instead donate to my own project. I’m not sure if you actually want that to happen (because it’d mean I’d have to actively try to defund Gnome).

It’d be much better if we just had one well-funded project that looks native in multiple DEs than separate per-DE projects.

                                                                                              1. 5

Considering that funding for open source projects is limited, this means I’ll have to try to get Gnome users to stop donating to Gnome

                                                                                                Huh? Why? They use GNOME, why would they want to fund something else? People should help projects they use.

and instead donate to my own project.

Find your own users. The backlash against GNOME - usually from people not even using GNOME - suggests that there’s a sizable userbase that would be interested in alternatives to some applications that currently have no non-GNOME alternative. Perhaps that’s an opportunity there.

                                                                                                1. 1

                                                                                                  Huh? Why? They use GNOME, why would they want to fund something else? People should help projects they use.

                                                                                                  The absolute majority of GNOME users only use it because they either don’t know of alternatives, or because they have to use a few GNOME apps because there’s no alternative. If true alternatives existed, a lot of people would stop using and funding GNOME.

                                                                                                  (This sentence was written by me using Budgie, which uses parts of GNOME, solely because I need to run a GTK based desktop just for one single app that doesn’t properly work otherwise. If I could, I’d never touch Gnome or GTK, ever)

                                                                                                  1. 6

                                                                                                    The absolute majority of GNOME users only use it because

                                                                                                    Do you have a credible source for that? Because my experience is the exact opposite. Every GNOME user I know (with wildly varying backgrounds), are aware of alternatives, yet, they use GNOME, and are in general, happy with it.

                                                                                                    If true alternatives existed, a lot of people would stop using and funding GNOME.

                                                                                                    I very much doubt that people who otherwise wouldn’t use GNOME, would fund it.

                                                                                                    solely because I need to run a GTK based desktop just for one single app that doesn’t properly work otherwise

                                                                                                    I very much doubt that there’s a GTK app that cannot be used unless you run a full GTK desktop. Link, please?

                                                                                                    1. 2

                                                                                                      n=1, but the reason I threw up my hands and stuck with GNOME on Fedora 36 was because my custom theme wasn’t entirely broken. Some apps use libadwaita and stick out like a sore thumb, though at least I can still move the window buttons to the left which is where I prefer them (for now?), but others still use the theme, and my system-wide font choices are apparently still honoured (again, for now?). But none of this means I don’t think that their UI choices are wasteful of space or find some of their design decisions personally suspect. I tolerate it, but I’m increasingly not happy with it, and eventually it will exceed my daily inertia. I have a custom window manager I’ve been working on, and I might be able to make KDE into enough of what I want that I have alternatives.

                                                                                                      1. 7

                                                                                                        You dislike the direction GNOME is taking then. That’s fine, and understandable: neither the looks, nor their approach suits everybody. Thankfully, in the free software world, there are alternatives.

                                                                                                        I hate that KDE has so many knobs, it’s overwhelming and distracting. The default theme looks horrible too, in my opinion. So I don’t use KDE, because I accept that I’m not their target audience. I don’t complain about it, I don’t hate on them, I am genuinely happy they take a different approach, because then other people can choose them.

                                                                                                        Sometimes the DE we use takes a different direction than one would like. That’s a bit of a bummer, but it happens. We move on, and find something else, because we can. Or fork, that happened too before, multiple times.

                                                                                                        Taking a different direction is not wrong. It’s just a different direction, is all. You may not like it, there are plenty who do.

                                                                                              2. 1

The macOS and Windows widget sets aren’t even portable.

                                                                                                Tell that to the wine darlings.

                                                                                                1. 3

                                                                                                  Apps running under Wine stick out like a sore thumb if they’re not basically compositing everything, in which case it’s at least on purpose. I believe that was Algernon’s point.

                                                                                                  1. 1

                                                                                                    Then every widget set is cross platform, because we can just run stuff in emulators. Good luck trying to look native then!

                                                                                                    1. 4

                                                                                                      run stuff in emulators

wine is not an emulator. It is an implementation of the Windows API on top of Linux. It is exactly as “native” as GTK and Qt, which are also just libraries implemented on top of Linux.

                                                                                                      The only question is what collection of applications you prefer. That’s really how native is defined on the linux desktop - that it fits in with the other things you commonly use.

                                                                                                2. 3

                                                                                                  I mean you’re the one choosing to use a Gnome app. “A Gnome app looks like a Gnome app” is, at its core, something that makes sense imo.

                                                                                                  That said I would like for there to be more unification on the low hanging fruit.

                                                                                              3. 10

It’s not “spite” - there are a million Linux desktops for tinkering and breaking. Give “normies” something productive and usable in the meanwhile and they might not all neglect what could be the best platform for their purposes. I use GNOME 4(?) on Wayland and it’s great - I had it looking about as clean as macOS, without the ugly icons, in like 10 minutes. Real geeks waste their time in the terminal anyway, not customising it. (:p)

                                                                                                1. 6

                                                                                                  It’s not “spite”

Well, what is it then? For decades GNOME had flexibility; users created horribly broken themes and everyone was more or less happy. GNOME was happy to have users. Users were happy they had the freedom to do whatever. Yes, not everything was perfect. Custom widgets were mostly broken, accessibility was lacking, etc.

As I said, GNOME’s heart is in the right place in wanting a working/accessible default, but does it have to be at the expense of flexibility? OP presents it as if there are only two options: either we let users do whatever, or we have a good nice-looking theme. And the main driving force behind the decision to remove configurability was distros having a bad default theme.

I think GNOME is completely misguided in their approach. Instead of creating a good, pretty, accessible default theme and telling people to use it if they want a good, pretty, accessible theme, they decided they won’t let distros break their default theme, and lumped users into the distro category. It goes completely against the spirit of FOSS. Instead of creating better options for users they chose to remove options.

                                                                                                2. 8

It seems GNOME is building for a wide audience of “normies” while their actual users are “geeks”. Their heart is in the right place in wanting an accessible and nice-looking UI, but they completely miss what their users want: the freedom to tinker and break their stuff, at the expense of accessibility and nice UI.

                                                                                                  I mean, technical professionals are trying to get their job done. Give me a desktop that works well, and I don’t want to touch it beyond using it. I want to work with compilers, not window managers.

                                                                                                  1. 6

                                                                                                    Give me a desktop that works well, and I don’t want to touch it beyond using it. I want to work with compilers, not window managers.

                                                                                                    I’ve said before that this is why Apple ended up being the manufacturer of the default “developer laptop”. They never really set out to do that, they just wanted to make nice and powerfully-spec’d machines targeting a broad “pro” market. But as a result of accidents of their corporate history, they ended up doing what no Linux distro vendor ever managed: ship something that works well and is Unix-y enough for developers at the same time.

                                                                                                    I ran various Linux distros as my primary desktop operating system for much of the 00s, and I know my first experience with an Apple laptop and OS X was a breath of fresh air.

                                                                                                1. 3

I kinda agree that JSON is not a native hypermedia, but neither is HTML. Have you ever tried to encode any method other than GET or POST in pure HTML? Well, you can’t. So it turns out HTML is not a fully realized hypermedia format either. The OP links to another 7 posts trying to convince us that HTML is the one true REST format, and neglects to mention that you can only encode half of the method semantics.

The author insists that the client needs all sorts of special knowledge to interpret JSON payloads but that HTML is somehow natively understood. Well, it’s not, if the client is not a browser. The client can very well understand some JSON with a schema that supports linking and method encoding, and whatever else. And that API is very much RESTful, even though not every client can use it.

                                                                                                  1. 4

HTML is a native hypermedia in that it has native hypermedia controls: links and forms. JSON does not. You can impose hypermedia controls on top of JSON, but that hasn’t been as popular as people expected.
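HAL is probably the best-known of those attempts: responses carry their own controls in a `_links` object. A sketch with made-up resource fields:

```python
# A HAL-style JSON payload: the data plus the legal next actions as links.
order = {
    "total": 30.00,
    "status": "pending",
    "_links": {
        "self":    {"href": "/orders/42"},
        "payment": {"href": "/orders/42/payment"},  # present only while payable
        "cancel":  {"href": "/orders/42/cancel"},   # gone once the order ships
    },
}
```

The links tell a generic client what it may do next, which is exactly the role links and forms play in HTML.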

I agree entirely that HTML is a limited hypermedia and, in particular, that it is silly that it doesn’t support the full gamut of HTTP actions. This is one of the four limitations of HTML that htmx is specifically designed to fix (from https://htmx.org/):

• Why should only <a> & <form> be able to make HTTP requests?
                                                                                                    • Why should only click & submit events trigger them?
                                                                                                    • Why should only GET & POST methods be available?
                                                                                                    • Why should you only be able to replace the entire screen?
                                                                                                    1. 2

I get what htmx is trying to achieve. However, it doesn’t help with the REST narrative OP presents. It tries to convince us that REST is good and everyone is wrong about it (which is fine). But it also tries to convince us that HTML is the way, while itself being a thing on top of HTML that makes HTML actually fulfil its role in REST.

Let’s assume, for the sake of the argument, that htmx is the actual hypermedia format that REST requires. Does that make REST useful? To actually use the REST API we need a very special kind of agent: a conforming web browser with scripting enabled.

Given that constraint, it’s no wonder no one actually implements REST APIs. We have a whole lot of clients that are not browsers: mobile clients that implement native UI, and IoT devices that cannot run a browser. And if the API we build for those is not REST (by the OP’s definition) anyway, then why bother building a separate REST API for the browser?

I like the idea of REST. I believe its ideas are valuable and can guide API design. Insistence on a particular hypermedia format (HTML, but, I guess, really meaning htmx) is misguided.

                                                                                                  1. 1

Maybe I’m missing something, but this is the second board I’ve seen with that strange component placement, the first being the Radxa Rock Pi 4.

The main SoC is on the bottom of the board, which means whatever cooling solution you use has to face downwards. Quite awkward with all the other wiring/GPIO facing up.

That board has an M.2 slot too, but it faces outwards, meaning an SSD or anything else in it has nothing physical to hold it down and just flails around.

                                                                                                    1. 3

On the other hand, the main heat-generating component has nothing in the way of applying a good cooling solution. That’s way harder to do when you have ports sticking out all around it.

                                                                                                      1. 1

Yeah, the original rpi design was built around having no cooling solution; later, tiny heatsinks appeared, but heat still wasn’t an issue even in that form factor. Now some of them run way too hot even for that, but they still stick to that credit-card size.

                                                                                                        1. 1

                                                                                                          I’ve got a couple rpi-4s in aluminum cases, and they seem to do fine with passive cooling.

                                                                                                      2. 1

PCengines boards have the CPU and chipset at the bottom so that they can be in contact (via a thermal pad) with the aluminium case, which serves as a heat sink. Seems to work pretty well. Speaking of which, that’s the x86_64 board I would recommend; it’s been rock solid. The only thing some may find surprising is the use of a serial console instead of video output.

                                                                                                        1. 1

Yeah, I’ve seen those, pretty cool! It makes sense there because it’s not an rpi form factor. A serial console is a given with that kind of board for me :)