1. 3

    I picked up kernel development again. I scrapped most of the kernel code I had and replaced it with some cleaner code. (Thanks to https://os.phil-opp.com/) The target this time is to include WebAssembly support; there is nebulet, but I want a bit more…

    1. 1

      Sad to hear it, he was one of a kind. Is there any effort to archive his work and preserve the site? His family might not want to keep paying the hosting bill forever.

      1. 2

        archive.org has a large collection of his videos and a lot of TempleOS ISOs.

        I have some of his videos, curated, missing most livestreams. I prefer the more in-depth videos about TOS.

      1. 1

        I get that mental illness gives old mate a pass on the racist diatribes, but most of those “features” are really bad ideas.

        1. 7

          As the article put it:

          Don’t write things off just because they have big flaws.

          That said, would you please expand on why most of the features are really bad ideas?

          1. 11

            I may be the only user of my computer, but I still appreciate memory protection.

            1. 5

              More to the point: Practically every, if not every, security feature is also an anti-footbullet feature. Memory protection protects my data from other people on the system and allows security contexts to be enforced, and it protects my data from one of my own programs going wrong and trying to erase everything it can address. Disk file protections protect my data from other users and partially-trusted processes, and ensure my own code can’t erase vital system files in the normal course of operation. That isn’t even getting into how memory protection interacts with protecting peripheral hardware.

              Sufficiently advanced stupidity is indistinguishable from malice.

              1. 15

                But that’s not really the point of TempleOS, is it?

                As Terry once mentioned, TempleOS is a motorbike. If you lean over too far you fall off. Don’t do that. There are no anti-footbullet features because that’s the point.

                Besides that, TOS still has some features lacking in other OSes. Severely lacking.

                1. 1

                  Besides that, TOS still has some features lacking in other OSes. Severely lacking.

                  Like?

                  1. 12

                    A shell that is not purely text but actual hypertext with images is lacking in most other OSes by default, and I would love to have that.

                    1. 6

                      If you’ve never played with Oberon or one of its descendant systems, or with Acme (inspired by Oberon) from Rob Pike, you should give it/them a try.

                      1. 0

                        If you start adding images and complex formatting into the terminal, then you lose the ability to pipe programs together and run text-processing tools on their output.

                        1. 13

                          Only because Unix can’t cope with the idea of anything other than bags of bytes, which unformatted text happens to be congruent with.

                          1. 4

                            I have never seen program composition of GUIs. The power of text is how simple it is to manipulate and understand with simple tools. If a tool gives you a list of numbers, it’s very easy to process. If the tool gives you those numbers in a picture of a pie chart, then it’s next to impossible to do stuff with that.

                            1. 7

                              Program composition of GUIs is certainly possible – the Alto had it. It’s uncommon in UNIX-derived systems and in proprietary end-user-oriented systems.

                              One can make the argument that the kind of pipelining of complex structured objects familiar from notebook interfaces & powershell is as well-suited to GUI composability as message-passing is (although I prefer message-passing for this purpose since explicit nominal typing associated with this kind of OO slows down iterative exploration).

                              A pie chart isn’t an image, after all – a pie chart is a list of numbers with some metadata that indicates how to render those numbers. The only real reason UNIX doesn’t have good support for rich data piping is that it’s hard to add support to standard tools decades later without breaking existing code (one of the reasons why plan9 is not fully UNIX compatible – it exposes structures that can’t be easily handled by existing tools, like union filesystems with multiple files of the same name, and then requires basically out-of-band disambiguation). Attempts to add extra information to text streams in UNIX tools exist, though (often as extra control sequences).
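
                              To make that concrete, here is a minimal, purely illustrative sketch (the JSON field names and the two-stage pipeline are made up, not any existing tool’s format): one invocation emits a “pie chart” as structured data, another reads it from a pipe, strips the rendering metadata and recovers the raw numbers.

                                  #!/usr/bin/env python3
                                  # Hypothetical structured pipe: "emit" writes a pie chart as JSON
                                  # (numbers plus rendering metadata); "values" reads it back from
                                  # stdin and prints only the numbers for further processing.
                                  import json
                                  import sys

                                  chart = {
                                      "type": "pie-chart",
                                      "title": "Disk usage",
                                      "slices": [
                                          {"label": "home", "value": 120.5},
                                          {"label": "var", "value": 42.0},
                                          {"label": "usr", "value": 88.3},
                                      ],
                                      "render": {"palette": "default", "legend": True},
                                  }

                                  if sys.argv[1:] == ["emit"]:
                                      json.dump(chart, sys.stdout)
                                  elif sys.argv[1:] == ["values"]:
                                      for s in json.load(sys.stdin)["slices"]:
                                          print(s["label"], s["value"])

                              Something like "python3 chart.py emit | python3 chart.py values" then behaves as a tiny pipeline over rich data rather than flat text (chart.py being whatever the sketch is saved as).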

                              1. 3

                                Have a look at PowerShell.

                                1. 3

                                  I have never seen program composition of GUIs. The power of text is how simple it is to manipulate and understand with simple tools. If a tool gives you a list of numbers, it’s very easy to process. If the tool gives you those numbers in a picture of a pie chart, then it’s next to impossible to do stuff with that.

                                  Then, respectfully, you need to get out more :) Calvin pointed out one excellent example, but there are others.

                                  Smalltalk / Squeak springs to mind.

                                  1. 2

                                    Certainly the data of the pie chart has to be structured, with enough metadata that you can pipe it to a tool which extracts the numbers. Maybe even one that manipulates them and returns a new pie chart.

                                2. 3

                                  You don’t lose that ability, considering such data would likely still have to be passed around in a pipe. All that changes is that your shell is now capable of understanding hypertext instead of normal text.

                                  1. 1

                                    I could easily imagine a command shell based on S-expressions rather than text which enabled one to pipe typed data (including images) easily from program to program.
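
                                    Just to illustrate the idea (the encoding and the “text”/“image” tags here are made up, not any real shell’s format): one stage emits typed values as S-expressions, the next dispatches on the type tag instead of guessing at flat text.

                                        #!/usr/bin/env python3
                                        # Toy S-expression pipe: "emit" writes one typed value per line,
                                        # "show" reads them from stdin and dispatches on the type tag.
                                        import sys

                                        def write_sexpr(tag, *fields):
                                            rendered = " ".join(f'"{f}"' if isinstance(f, str) else str(f) for f in fields)
                                            print(f"({tag} {rendered})")

                                        def read_sexprs(stream):
                                            # Extremely naive reader: one parenthesised expression per line.
                                            for line in stream:
                                                line = line.strip()
                                                if line.startswith("(") and line.endswith(")"):
                                                    tag, _, rest = line[1:-1].partition(" ")
                                                    yield tag, rest

                                        if sys.argv[1:] == ["emit"]:
                                            write_sexpr("text", "hello world")
                                            write_sexpr("image", "logo.png", 64, 64)
                                        elif sys.argv[1:] == ["show"]:
                                            for tag, rest in read_sexprs(sys.stdin):
                                                if tag == "image":
                                                    print(f"[an image-capable shell would render {rest} inline]")
                                                else:
                                                    print(f"plain value: {rest}")

                                    Piping the “emit” invocation into the “show” invocation gives a crude flavour of what such a shell could do natively.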

                              2. 1

                                But why do I want that? It takes me 30 seconds to change permissions on /dev/mem such that I too can ride a motorbike without a helmet.

                                1. 2

                                  That is completely beside the point. A better question is how long would it take you to implement an operating system from scratch, by yourself, for yourself. When you look at it that way, of course he left some things out. Maybe those things just weren’t as interesting to him.

                                  1. 1

                                    You could do that, but in TOS that’s the default. Defaults matter a lot.

                                    1. 2

                                      /dev/mem more or less world accessible was also the default for a particular smartphone vendor I did a security audit for.

                                      Defaults do matter a lot…

                                2. 8

                                  If there are no other users, and it takes only a second or two to reload the OS, what’s the harm?

                                  1. 6

                                    It’s fine for a toy OS, but I don’t want to be working on real tasks where a bug in one program could wipe out everything I’m working on or corrupt it silently.

                                    1. 11

                                      I don’t think TempleOS has been advertised as anything other than a toy OS. All this discussion of “but identity mapped ring 0!” seems pretty silly in context. It’s not designed to meet POSIX guidelines, it’s designed to turn your x86_64 into a Commodore.

                              3. 2

                                Don’t write things off just because they have big flaws.

                                That’s pretty much the one and only reason why you would want to write things off.

                                1. 14

                                  There’s a difference between writing something off based on it having no redeeming qualities and writing something off because it’s a mixed bag. TempleOS is a mixed bag – it is flawed in a generally-interesting way. (This is preferable to yet another UNIX, which is flawed in the same boring ways as every other UNIX.)

                              4. 2

                                This is probably not what you meant to imply, but nobody else said it, so just to be clear: Mental illness and racism aren’t correlated.

                                1. 2

                                  Whatever is broken inside somebody to make them think the CIA is conspiring against them, I find it hard to believe that same fault couldn’t easily make somebody think redheads are conspiring against them.

                                  1. 2

                                    You’re oversimplifying. There are many schizophrenic people in the U.S., and most of them are not racist. Compulsions, even schizophrenic ones, don’t come from the ether, and they’re not correlated with any particular mental illness. Also, Terry’s compulsions went far beyond paranoia.

                              1. 8

                                To be fair, they should also mark as “Not Secure” any page running JavaScript.

                                Also, pointless HTTPS adoption might reduce content accessibility without blocking censorship.
                                (Disclaimer: this does not mean that you shouldn’t adopt HTTPS for sensitive content! It just means that adopting HTTPS should not be a matter of fashion: there are serious trade-offs to consider.)

                                1. 11

                                  By adopting HTTPS you basically ensure that nasty ISPs and CDNs can’t insert garbage into your webpages.

                                  1. [Comment removed by author]

                                    1. 5

                                      Technically, you authorize them (you sign actual paperwork) to get/generate a certificate on your behalf (at least this is my experience with Akamai). You don’t upload your own SSL private key to them.

                                      1. 3

                                        Why on earth would I give anyone else my private certificate?

                                        1. 4

                                          Because it’s part of The Process. (Technical Dark Patterns, Opt-In without a clear way to Opt-Out, etc.)

                                          Because you’ll be laughed at if you don’t. (Social expectations, “received wisdom”, etc.)

                                          Because Do It Now. Do It Now. Do It Now. (Nagging emails. Nagging pings on social media. Nagging.)

                                          Lastly, of course, are Terms Of Service, different from the above by at least being above-board.

                                      2. 2

                                        No.

                                        It protects against cheap man-in-the-middle attacks (such as the one an ISP could mount), but it can do nothing against CDNs that can identify you, since CDNs serve you JavaScript over HTTPS.

                                        1. 11

                                          With Subresource Integrity (SRI) page authors can protect against CDNed resources changing out from beneath them.
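
                                          For reference, an SRI integrity value is just a base64-encoded digest of the resource, prefixed with the hash algorithm, which the browser verifies before using the fetched file. A minimal sketch of computing one for a local copy of a CDN-hosted script (the file path is only an example):

                                              #!/usr/bin/env python3
                                              # Compute a Subresource Integrity value ("sha384-<base64 digest>")
                                              # for a local copy of a script served from a CDN; the result goes
                                              # into the script tag's integrity= attribute (with crossorigin="anonymous").
                                              import base64
                                              import hashlib
                                              import sys

                                              def sri_sha384(path):
                                                  with open(path, "rb") as f:
                                                      digest = hashlib.sha384(f.read()).digest()
                                                  return "sha384-" + base64.b64encode(digest).decode("ascii")

                                              if __name__ == "__main__":
                                                  print(sri_sha384(sys.argv[1]))  # e.g. vendor/library.min.js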

                                          1. 1

                                            Yes, SRI mitigates some of the JavaScript attacks that I describe in the article, in particular the nasty ones from CDNs exploiting your trust in a harmless-looking website.
                                            Unfortunately several others remain possible (just think of JSONP, or, even simpler, the website itself colluding in the attack). Also, it needs widespread adoption to become a security feature: it should probably be mandatory, but for sure browsers should mark as “Not Secure” any page downloading programs from CDNs without it.

                                            Where SRI could really help is with the accessibility issues described by Meyer: you can serve most page resources as cacheable HTTP resources if the content hash is declared in an HTTPS page!

                                          2. 3

                                            With SRI you can prevent the CDNs you use to load external JS scripts from manipulating the webpage.

                                            I also don’t buy the linked claim that it reduces content accessibility; the link you provided above explains a problem that would be solved by simply using an HTTPS caching proxy (something a lot of corporate networks seem to have no problem operating, considering TLS 1.3 explicitly tries not to break those middleboxes).

                                            1. 4

                                              CDNs are man-in-the-middle attacks.

                                          3. 1

                                            As much as I respect Meyer, his point is moot. MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. Some companies even made out-of-the-box HTTPS URL filtering their selling point. If people are ready or forced to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, it’s their problem, not webmasters’. We should be ready to teach those in need how to set it up, of course, but that’s about it.

                                            1. 0

                                              MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. […] If people are ready or forced to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, it’s their problem, not webmasters’.

                                              Well… how can I say this… I don’t think so.

                                              Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                                              Beyond the obvious risk that the proxy is compromised (you should never assume that it won’t be), which is pretty high in some places (not only in Africa… don’t be naive, a chain is only as strong as its weakest link), a transparent HTTPS proxy has an obvious UI issue: people do not realise that it’s unsafe.

                                              If browsers don’t mark them as “Not Secure” (how could they?), users will overlook the MitM risks, turning a security feature against users’ real security and safety.

                                              Is this something webmasters should care about? I think so.

                                              1. 4

                                                Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                                                Not sure how to tell you this, but companies have been doing this on their internal networks for a very long time, and it is basically standard operating procedure at every enterprise-level network I’ve seen. They create their own CA, generate an intermediate CA key and certificate, and then put that on an HTTPS MITM transparent proxy that inspects all traffic going in and out of the network. The intermediate cert is added to the certificate store on all devices issued to employees so that it is trusted. By inspecting all of the traffic, they can monitor for external and internal threats, scan for exfiltration of trade secrets and proprietary data, and keep employees from watching porn at work. There is an entire industry around products that do this; BlueCoat and Barracuda are two popular examples.

                                                1. 5

                                                  There is an entire industry around products that do this

                                                  There is an entire industry around ransomware. But that does not mean it’s a security solution.

                                                  1. 1

                                                    It is, it’s just that the word security is better understood as “who” is getting secured (or not) from “whom”.

                                                    What you keep saying is that MitM proxies do not protect the security of end users (that is, employees). What they do, however, in certain contexts like the one described above, is help protect the organisation in which the end users operate. Arguably they do, because it certainly makes it more difficult to protect yourself from something you cannot see. If employees are seen as a potential threat (they are), then reducing their security can help you (the organisation) with yours.

                                                    1. 1

                                                      I wonder if you read the articles I linked…

                                                      The point is that, in a context of unreliable connectivity, HTTPS dramatically reduces accessibility, but it doesn’t help against censorship.

                                                      In this context, we need to grant people both accessibility and security.

                                                      An obvious solution is to give them cacheable HTTP access to the content. We could fool the clients into trusting a MitM caching proxy, but since all we want is caching this is not the best solution: it adds no security, only a false sense of security. Thus, in that context, you can improve users’ security by removing HTTPS.

                                                      1. 1

                                                        I have read it, but more importantly, I worked in and built services for places like that for about 5 years (Uganda, Bolivia, Tajikistan, rural India…).

                                                        I am with you that an HTTPS proxy is generally best avoided, if for no other reason than that it grows the attack surface. I disagree that removing HTTPS increases security. It adds a lot more places and actors that can now negatively impact the user, in exchange for the user knowing this without being able to do much about it.

                                                        And that is even without going into which content is safe to cache in a given environment.

                                                        1. 1

                                                          And that is even without going into which content is safe to cache in a given environment.

                                                          Yes, this is the best objection I’ve read so far.

                                                          As always, it’s a matter of trade-offs. In a previous related thread I described how I would try to fix the issue in a way that people can easily opt out and opt in.

                                                          But while I think it would be weird to remove HTTPS for an e-commerce cart or for a political forum, I think that most of Wikipedia should be served through both HTTP and HTTPS. People should be aware that HTTP pages are not secure (even though it all depends on your threat model…) but should not be misled into thinking that pages going through a MitM proxy are secure.

                                                2. 2

                                                  An HTTPS proxy isn’t incompetence, it’s industry standard.

                                                  They solve a number of problems and are basically standard in almost all corporate networks with a minimum security level. They aren’t a weak link in the chain, since traffic in front of the proxy is HTTPS and behind it stays in the local network, encrypted under a network-level CA (you can restrict CA capabilities via TLS certificate extensions; there is a fair number of useful ones that prevent compromise).

                                                  Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to a device, at which level there is no reason to consider what the user is doing insecure.

                                                  1. 2

                                                    Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to a device, at which level there is no reason to consider what the user is doing insecure.

                                                    Browsers bypass the network configuration to protect the users’ privacy.
                                                    (I agree this is stupid, but they are trying to push this anyway)

                                                    The point is: the user’s security is at risk whenever something that is not secure is presented to her as HTTPS (which stands for “HTTP Secure”). It’s a rather simple and verifiable fact.

                                                    It’s true that posing a threat to employees’ security is an industry standard. But it’s not a security solution. At least, not for the employees.

                                                    And, doing that in a school or a public library is dangerous and plain stupid.

                                                    1. 0

                                                      Nobody is posing a threat to employees’ security here; a corporation can in this case be regarded as a single entity, so terminating SSL at the borders of the entity, similar to how a browser terminates SSL by showing the website on a screen, is fairly valid.

                                                      Schools and public libraries usually have their internet access filtered, yes, and that is usually made clear to the user before using it (at least when I wanted access to either, I was in both cases instructed that the network is supervised and filtered), which IMO negates the potential security compromise.

                                                      Browsers bypass the network configuration to protect the users’ privacy.

                                                      Browsers don’t bypass root CA configuration, core system configuration, network routing information, or network proxy configuration to protect a user’s privacy.

                                                      1. 1

                                                        Schools and public libraries usually have their internet access filtered, yes, and that is usually made clear to the user before using it [..] which IMO negates the potential security compromise.

                                                        Yes, this is true.

                                                        If people are kept constantly aware of the presence of a transparent HTTPS proxy/MitM, I have no objection to its use instead of an HTTP proxy for caching purposes. Marking all pages as “Not Secure” is a good way to gain such awareness.

                                                        Browsers don’t bypass root CA configuration, core system configuration, network routing information, or network proxy configuration to protect a user’s privacy.

                                                        Did you know about Firefox’s DoH/CloudFlare affair?

                                                        1. 2

                                                          Yes, I’m aware of the “affair”. To my knowledge the initial DoH experiment was localized and run on users who had enabled studies (opt-in). Both for the experiment and now, Mozilla has a contract with Cloudflare to protect user privacy during queries when DoH is enabled (which to my knowledge it isn’t by default). In fact, the problem ungleich is blogging about isn’t even slated for standard release yet, to my knowledge.

                                                          It’s plain old wrong in the bad kind of way; it conflates security maximalism with Mozilla’s mission to bring the maximum number of users privacy and security.

                                                          1. 1

                                                            TBH, I don’t know what you mean by “security maximalism”.

                                                            I think ungleich raises serious concerns that should be taken into account before shipping DoH to the masses.

                                                            Mozilla has a contract with Cloudflare to protect user privacy

                                                            It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                                            AFAIK, even Facebook had a contract with its users.

                                                            Yeah… I know… they will “do no evil”…

                                                            1. 1

                                                              Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                                              It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                                              Cloudflare hasn’t done much that makes me believe they will violate my privacy. They’re not in the business of selling data to advertisers.

                                                              AFAIK, even Facebook had a contract with its users

                                                              Facebook used Dark Patterns to get users to willingly agree to terms they would otherwise never agree to; I don’t think this is comparable. Facebook likely never violated the contract terms with their users that way.

                                                              1. 1

                                                                Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                                                You should define “common user”.
                                                                If you mean the politically inept who are happy to be easily manipulated as long as they are given something to say and retweet… yes, they have nothing to fear.
                                                                The problem is for those people who are actually useful to society.

                                                                Cloudflare hasn’t done much that makes me believe they will violate my privacy.

                                                                The problem with Cloudflare is not what they did, it’s what they could do.
                                                                There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                                                But my concerns are with Mozilla.
                                                                They are trusted by millions of people worldwide. Me included. But actually, I’m starting to think they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                1. 1

                                                                  So in your opinion, the average user does not deserve the protection of being able to browse the net as safely as we can make it for them?

                                                                  Just because you think they aren’t useful to society (and they are; these people have all the important jobs, and someone isn’t useless because they can’t use a computer) doesn’t mean we, as software engineers, should abandon them.

                                                                  There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                                                  Then don’t use it? DoH isn’t going to be enabled by default in the near future and any UI plans for now make it opt-in and configurable. The “Cloudflare is default” is strictly for tests and users that opt into this.

                                                                  they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                  You mean safe because everyone involved knows what’s happening?

                                                                  1. 1

                                                                    I don’t believe the concerns are really concerns for the common user.

                                                                    You should define “common user”.
                                                                    If you mean the politically inept who are happy to be easily manipulated…

                                                                    So in your opinion, the average user does not deserve the protection of being able to browse the net as safely as we can make it for them?

                                                                    I’m not sure if you are serious or just pretending not to understand to cope with your lack of arguments.
                                                                    Let’s assume the former… for now.

                                                                    I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept. That’s obviously because anyone politically inept is unlikely to be affected by surveillance.
                                                                    That’s it.

                                                                    they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                    You mean safe because everyone involved knows what’s happening?

                                                                    Really?
                                                                    Are you sure everyone understands what a MitM attack is? Are you sure every employee understands that their system administrators can see the mail they read in GMail? I think you don’t have much experience with users, and I hope you don’t design user interfaces.

                                                                    A MitM caching HTTPS proxy is not safe. It can be useful for corporate surveillance, but it’s not safe for users. And it extends the attack surface, both for the users and the company.

                                                                    As for Mozilla: as I said, I’m just not sure whether they deserve trust or not.
                                                                    I hope they do! Really! But it’s really too naive to think that a contract is enough to bind a company more than a subpoena. And they ship WebAssembly. And you have to edit about:config to disable JavaScript…
                                                                    All this is very suspect for a company that claims to care about users’ privacy!

                                                                    1. 0

                                                                      I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept.

                                                                      I’m saying the concerns raised by ungleich are too extreme and should be dismissed on the grounds of not being practical in the real world.

                                                                      Are you sure everyone understands what a MitM attack is?

                                                                      An attack requires an adversary, the evil one. An HTTPS caching proxy isn’t evil or an enemy; you have to opt into this behaviour. It is not an attack, and I think it’s not fair to characterise it as such.

                                                                      Are you sure every employee understands that their system administrators can see the mail they read in GMail?

                                                                      Yes. When I signed my work contract this was specifically pointed out and made clear in writing. I see no problem with that.

                                                                      And it extends the attack surface, both for the users and the company.

                                                                      And it also enables caching for users with less than stellar bandwidth (think third world countries where satellite internet is common, 500ms ping, 80% packet loss, 1mbps… you want caching for the entire network, even with HTTPS)

                                                                      And they ship WebAssembly.

                                                                      And? I have no concerns about WebAssembly. It’s not worse than obfuscated JavaScript. It doesn’t enable anything that wasn’t possible before via asm.js. The post you linked is another security-maximalist opinion piece with few factual arguments.

                                                                      And you have to edit about:config to disable JavaScript…

                                                                      Or install a half-way competent script blocker like uMatrix.

                                                                      All this is very suspect for a company that claims to care about users’ privacy!

                                                                      I think it’s understandable for a company that both cares about users’ privacy and doesn’t want a market share of “only security maximalists”, also known as 0%.

                                                                      1. 1

                                                                        An attack requires an adversary, the evil one.

                                                                        According to this argument, you don’t need HTTPS as long as you don’t have an enemy.
                                                                        It shows very well your understanding of security.

                                                                        The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                                                        I have no concerns about WebAssembly.

                                                                        Not a surprise.

                                                                        Evidently you have never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).

                                                                        Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.

                                                                        As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.

                                                                        1. 1

                                                                          According to this argument, you don’t need HTTPS as long as you don’t have an enemy.

                                                                          If there is no adversary, no Mallory in the connection, there is no reason to encrypt it either, correct.

                                                                          It shows very well your understanding of security.

                                                                          My understanding of security is based on threat models. A threat model includes who you trust, who you want to talk to, and who you don’t trust. It includes how much money you want to spend, how much your attacker can spend, and the methods available to both of you.

                                                                          There is no binary security; a threat model is the entry point, and your protection mechanisms should match your threat model as well as possible or exceed it, but there is no reason to exert effort beyond your threat model.

                                                                          The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                                                          Mallory is a potential enemy. An HTTPS caching proxy operated by a corporation is not an enemy. It’s not Mallory; it’s Bob, Alice and Eve, where Bob wants to send Alice a message, she works for Eve, and Eve wants to avoid having duplicate messages on the network, so Eve and Alice agree that caching the encrypted connection is worthwhile.

                                                                          Mallory sits between Eve and Bob, not Bob and Alice.

                                                                          Evidently you have never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).

                                                                          I did, in which case I either filed a GitHub issue if the project was open source, or I notified the company that offered the JavaScript or optimized binary. Usually the bug is then fixed.

                                                                          It’s not my duty or problem to debug web applications that I don’t develop.

                                                                          Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.

                                                                          Then don’t do it? Nobody is forcing you.

                                                                          As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.

                                                                          I don’t think you consider that a practical problem, such as bad connections, can outweigh a lot of potential security issues, since you don’t have the time or user patience to do it properly, and in most cases it’ll be good enough for the average user.

                                                  2. 2

                                                    My point is that the problems of unencrypted HTTP and MitM’ed HTTPS are exactly the same. If one used to prefer the former because it can be easily cached, I can’t see how setting up the latter makes their security issues worse.

                                                    1. 3

                                                      With HTTP you know it’s not secure. OTOH you might not be aware that your HTTPS connection to the server is not secure at all.

                                                      The lack of awareness makes MitM caching worse.

                                              1. 4

                                                I’m gonna sit in the smug corner for people running AMD.

                                                Otherwise, this is all kinds of “very very bad”. The kind where, in a cartoon, you’d have sirens spin up to warn of incoming air raids.

                                                1. 3

                                                  Because SEV has been that much better?

                                                  1. 2

                                                    The most secure system is the system no one uses :)

                                                1. 1

                                                  Thank you for your work! I was almost going to subscribe to a lot of them individually.

                                                  1. 1

                                                    That’s quite a neat survey with interesting results, despite the admitted bias of the author. Maybe they can redo it in the future to see how it develops and improve on the survey itself…

                                                    1. 5

                                                      With waterfall, sure, you can get a lot done with a beaming goalpost in the distance, but is work done on the wrong things considered throughput? Agile should help resolve issues that crop up along the way, and as such throughput might be a bit less, but the idea is that you can change focus and direction as programs get tested, whereas with waterfall you’ll finish a product fast, only to realize that you have to undo a lot of the work done to get back to a place where you can change it. This grossly oversimplifies the whole process, but those are the ups and downs. The agile upside is better direction and “correct” throughput, at the cost of project visibility into the future.

                                                      1. 1

                                                        Depends on how you do waterfall, just like it depends on how you do agile.

                                                        With waterfall you can still have all those things agile does: weekly meetings, burndown charts, even sprints. The important thing behind waterfall is that right at the very beginning you have a complete description of everything the program will be doing. When you do the programming, you do all the programming. Once it’s done you test it and make little adjustments where things are going wrong. Once you’ve done the testing, you hand over to the customer and do the next project.

                                                        In comparison, Agile asks you to regularly interact with the customer and see if the requirements evolved or changed, things that might happen when the customer sees the application evolve. You don’t do that in Waterfall.

                                                        Waterfall can be immensely powerful when deployed under the right circumstances. As does Agile.

                                                        I would also disagree that agile gives you more correct throughput; waterfall can be just as good if your customer gives you a good handover. But this depends entirely on the customer.

                                                        1. 2

                                                          Mini-waterfall is agile in the large, and I think mini-waterfall is kinda underrated, especially in teams where the same people work together frequently over a long period:

                                                          “We all understand what this is, it’s going to take us 3-5 weeks, we might have a few questions along the way, but it should be done by about October”.

                                                          That can actually work out pretty well.

                                                          1. 1

                                                            I think we are pretty much in agreement. Waterfall puts a lot of responsibility on the customer knowing exactly what they want and how they want it, and on having some good architects who can create a specification that matches this. If all that is in order, waterfall should win hands down, and the testing at the end should be minor (some graphical design, a few discrepancies, etc.)

                                                            Where agile wins is when this is not the case, which unfortunately happens often. In cases where you are not producing for a customer (such as internal product development), it can also be hard to know in advance how a feature will work out, and you want feedback early and often. Agile is then better, to avoid working in the wrong direction and to “correct the course” often, since there is no “X marks the spot” for crossing the finish line.

                                                        1. 8

                                                          KeePass has clients that work on the 3 operating systems in question, and I’ve had good luck using Syncthing to share the password file between computers, but the encryption of the database means that any good sync utility can work with it.

                                                          1. 4

                                                            I have used KeePassX together with Syncthing on multiple Ubuntus and Androids for two years now. By now I have three duplicate conflict files, which I keep around because I have no idea what the difference between the files is. Once I had to retrieve a password from such a conflict file as it was missing from the main one.

                                                            Not perfect, but it works.

                                                            Duclare, using ssh instead of Syncthing would certainly work since the database is just a file. I prefer Syncthing because of convenience.

                                                            1. 2

                                                              Duclare, using ssh instead of Syncthing would certainly work since the database is just a file.

                                                              Ideally it’d be automated and integrated into the password manager, though. Keepass2android does support it, but it does not support passwordless login, and I don’t recall it ever showing me the server’s fingerprint and asking if that’s OK. So it’s automatically logging in with a password to a host run by who knows. Terribly insecure.

                                                              1. 1

                                                                I had the same situation: 3 conflict files, and merging is a pain. I’ve switched to Pass instead now.

                                                              2. 2

                                                                I have been using KeePass for a few years now too. I tried other password managers in the meantime, but I never got quite satisfied, not even with pass, though that one was just straight-up annoying.

                                                                I’ve had a few conflicts over the years, but usually Nextcloud is rather good at avoiding conflicts here and KPXC handles them very well. I think Syncthing might cause more problems, as someone else noted, since nodes might take a while to sync up.

                                                              1. 1

                                                                The website seems to have been taken down, since I get a 403; maybe the author didn’t like being linked from Lobste.rs, or they’re shy.

                                                                1. 1

                                                                  Works for me from here.

                                                                  1. 1

                                                                    Curiously it works from my phone.

                                                                    I guess my IP is blocked or something? Weird…

                                                                    1. 2

                                                                      Their hosting provider applies blocks rather … aggressively.

                                                                1. 11

                                                                  Git via email can be nice, but it’s very hard to get used to. It took me ages to set up git send-email correctly, and my problem in the end was that our local router blocked SMTP connections to non-whitelisted servers. This is just one way it can go wrong; I can imagine there are many more.
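
                                                                  (For anyone hitting the same wall: the relevant knobs are a handful of sendemail.* settings in ~/.gitconfig. A rough sketch, with host, login and port only as placeholders for a Fastmail-style provider, adjust for yours:)

                                                                      [sendemail]
                                                                          smtpServer = smtp.fastmail.com   ; example host, use your provider's
                                                                          smtpUser = you@example.com       ; example login
                                                                          smtpEncryption = ssl             ; or tls for STARTTLS setups
                                                                          smtpServerPort = 465             ; 587 is common with STARTTLS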

                                                                  And just another minor comment: Everyone knows that Git is decentralized (not federated, btw.), the issue is GitHub, i.e. the service that adds a certain something to version control, like profiles, stars, commit-stats, fork-counts, followers, etc. A one-sided, technical perspective ignores all of these things as useless and unnecessary – falsely. Centralized platforms have an unfair advantage in this perspective, since there’s only one voice, one claim, and no way to question it. One has to assume they make sure that the accounts are all real, and not spam bots, otherwise nothing makes sense.

                                                                  Overcoming this issue is the big task. And email, which is notoriously bad at any identity validation, might not be the best thing. To be fair, ActivityPub currently isn’t either, but the thought that different services and platforms could interoperate (and some of these might even support an email interface) seems at the very least interesting to me.

                                                                  1. 12

                                                                    Article author here. As begriffs said, I propose email as the underlying means of federating web forges, as opposed to ActivityPub. The user experience is very similar and users who don’t know how to or don’t want to use git send-email don’t have to.

                                                                    Everyone knows that Git is decentralized (not federated, btw.)

                                                                    The point of this article is to show that git is federated. There are built-in commands which federate git using email (a federated system) as the transport.

                                                                    GitHub, i.e. the service that adds a certain something to version control, like profiles, stars, commit-stats, fork-counts, followers, etc. A one-sided, technical perspective ignores all of these things as useless and unnecessary – falsely

                                                                    Profiles can live on sr.ht, but also on any sr.ht instance, or on any instance of any other forge software which federates with email. A person would probably still have a single canonical place they live on, and a profile there which lists their forks of software from around the net. Commit stats are easily generated on such a platform as well. Fork counts and followers (stars?) I find much less interesting, they’re just ego stroking and should be discarded if technical constraints require.

                                                                    1. 4

                                                                      I don’t think that’s a strong argument in favor of git being federated. I don’t think it matters either.

                                                                      Git in and of itself does not care about the transport. It does not care whether you use HTTP, git:// or email to bring your repo up to date. You can even use a USB stick.

                                                                      I’d say git is communication-format agnostic; federation is all about standardizing communication. Using email with git is merely another way to pipe git I/O; git itself does not care.

                                                                      1. 2

                                                                        git send-email literally logs into an SMTP server and sends a patch with it.

                                                                        git am and git format-patch explicitly refer to mailboxes.

                                                                        Email is central to the development of Linux and git itself, the two projects git is designed for. Many git features are designed with email in mind.

                                                                        1. 4

                                                                          Yes, but ultimately both do not require, nor care about, federation itself.

                                                                          send-email is IMO more of a utility function; git am and format-patch, which as you mention refer to mailboxes, have nothing to do with email’s federated nature. Neither is SMTP, tbh, at least on the client-server side.

                                                                          They’re convenience scripts that do the hard part of writing patches into mails for you; you can also just have your mailbox on a USB stick and transport it that way. And the SMTP doesn’t need to go elsewhere either.

                                                                          I guess the best comparison is that these scripts are no more than a frontend for Mastodon. The frontend of Mastodon isn’t federated either; Mastodon itself is. Federation is the server-to-server part. That’s the part we care about. But git doesn’t care about that.

                                                                          1. 9

                                                                            I see what you’re getting at. I have to concede that you are correct in a pedantic sense, but in a practical sense none of what you’re getting at matters. In a practical sense, git is federated via email.

                                                                          2. 3

                                                                            That various email utilities are included seems more like a consequence of email being the preferred workflow of the git developers. I don’t see how that makes it the canonical workflow compared to pulling from remotes via HTTP or SSH; git has native support for both, after all.

                                                                        2. 1

                                                                          I believe that @tscs37 already showed that Git is distributed, since all nodes are equal (no distinction between clients and servers), while a git network can be structured in a federated fashion, or even in a centralized one. What the transport medium has to do with this is still unclear to me.

                                                                          Fork counts and followers (stars?) I find much less interesting, they’re just ego stroking and should be discarded if technical constraints require

                                                                          That’s exactly my point. GitHub offers a standard unit that is easily recognisable and readable (simply because everyone is used to it). This has a value, and ultimately a relevance, that can’t just be ignored, even if the reason is nonsense. Ignoring it would just be another example of technical naïveté.

                                                                          I’ve shown my sympathy for ideas like these before, and I most certainly don’t want to give the impression of being a GitHub apologist. All I want to remind people of is that the social aspects beyond necessity (builds, issue trackers, …) are all things one has to seriously consider and tackle when one is interested in offering an alternative to GitHub with any serious ambitions.

                                                                          1. 1

                                                                            I don’t think sr.ht has to please everyone. People who want these meaningless social features will probably be happier on some other platform, while the veterans are getting work done.

                                                                            1. 1

                                                                              I’m fine with people using mailing-list oriented solutions (the elitism might be a bit off-putting, but never mind). I just don’t think that it’s that much better than the GitPub idea.

                                                                              People who want these meaningless social features will probably be happier on some other platform, while the veterans are getting work done.

                                                                              If having these so-called “meaningless social features” helps a project thrive and attract contributors and new users, I wouldn’t consider them meaningless. But if that’s not what you are interested in, that’s okay too.

                                                                        3. 2

                                                                          our local router blocked SMTP connections to non-whitelisted servers

                                                                          The article says that sr.ht can optionally send the emails for you, no git send-email required: “They’ll enter an email address (or addresses) to send the patch(es) to, and we’ll send it along on their behalf.”

                                                                          Also what mail transfer agent were you pointing git send-email at? You can have it send through Gmail/Fastmail/etc. servers – would your router block that?
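                                                                          In case it’s useful to anyone following along, here is a rough sketch of pointing git send-email at a provider’s SMTP submission server. The hosts and ports are the commonly documented ones for Fastmail and Gmail rather than anything taken from this thread, so double-check them against your provider:

                                                                          $ git config --global sendemail.smtpServer smtp.fastmail.com
                                                                          $ git config --global sendemail.smtpServerPort 465
                                                                          $ git config --global sendemail.smtpEncryption ssl
                                                                          $ git config --global sendemail.smtpUser you@fastmail.com
                                                                          # for Gmail: smtp.gmail.com, port 587, encryption "tls"
                                                                          $ git send-email --to=some-list@example.org origin/master..HEAD

                                                                          Since this uses the submission ports (465/587) rather than port 25, a router that only blocks outbound SMTP on 25 usually won’t get in the way.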

                                                                          GitHub […] adds a certain something to version control, like profiles, stars, commit-stats, fork-counts, followers

                                                                          How about mirroring code on github to collect stars? Make it a read-only mirror by disabling issues and activating the pull request rejection bot. Git, Linux, and Postgres do this, and probably other projects do too.

                                                                          Email […] is notoriously bad at any identity validation

                                                                          Do SPF, DKIM and DMARC make this no longer true, or are there still ways to impersonate people?
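                                                                          For reference, a rough sketch of what those records look like; example.com, the selector, and the key are placeholders. SPF lists which hosts may send mail for the domain, DKIM publishes a public key for verifying message signatures, and DMARC tells receivers what to do when the other two fail:

                                                                          example.com.                       TXT  "v=spf1 mx -all"
                                                                          selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIGf...truncated..."
                                                                          _dmarc.example.com.                TXT  "v=DMARC1; p=reject; rua=mailto:postmaster@example.com"

                                                                          They only help receivers reject forgeries for domains that actually publish them, which is the caveat raised below.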

                                                                          1. 1

                                                                            Also what mail transfer agent were you pointing git send-email at?

                                                                            Fastmail. Was too esoteric for the default settings of my router. And if it weren’t for support, I would have never guessed that that was the issue, since the whole interface is so alien to most people (just like the questions: did I send the right commits, is my message formatted correctly, etc.)

                                                                            How about mirroring code on github to collect stars? Make it a read-only mirror by disabling issues and activating the pull request rejection bot. Git, Linux, and Postgres do this, and probably other projects do too.

                                                                            I’m not saying it’s perfect (again, I’m no GitHub apologist) – my point is that it isn’t irrelevant!

                                                                            Do SPF, DKIM and DMARC make this no longer true, or are there still ways to impersonate people?

                                                                            Yes, if someone doesn’t use these things. And claiming “oh, but they just should” is again raising the entry barrier, which would already be too high as it is.

                                                                            1. 1

                                                                              Yes, if someone doesn’t use these things. And claiming “oh, but they just should” is again raising the entry barrier, which would already be too high as it is.

                                                                              This doesn’t damn the whole idea, it just shows us where open areas of development are.

                                                                        1. 7

                                                                          OpenID sadly never took off anywhere; the only provider I know of that offers and supports it is Uberspace, and they are very niche.

                                                                          OpenID Connect is probably going to be a bit more successful since it piggybacks on OAuth2 internally.

                                                                          1. 7

                                                                            It looks like they haven’t even paid the previous fine and are still appealing: https://www.theverge.com/2017/9/11/16291482/google-alphabet-eu-fine-antitrust-appeal

                                                                            I guess this one will go the same way? Even if their appeals are unsuccessful, these fines are probably not a big deal if they are able to drag these things out for years (ie cost per year wouldn’t be that high). By the time they have to pay and change their practices, they might have some other strategy in place.

                                                                            This reminds me a lot of Microsoft of the 2000s.

                                                                            1. 6

                                                                              I’m not sure if it’s a recent change, but Google will have to pay the fine into a trust account if they want to appeal. Either way they have to pay now, and if they win the appeal they get it back (without interest).


                                                                              posted from my phone

                                                                              1. 2

                                                                                That’s good to know. It might be a bit more convincing then.

                                                                                1. 2

                                                                                  That’s great, apparently they did learn from Microsoft!

                                                                                2. 3

                                                                                  There are five different investigations the EU is doing into Google. They are at different stages. The previous fine is being appealed now. https://imgur.com/6uLtQX5

                                                                                  This chart comes from a WSJ article from the day of the fine announcement.

                                                                                  1. 1

                                                                                    They may be fined for every day of non-compliance. The EU is effective, if there’s the will to do so.

                                                                                  1. 5

                                                                                    It is, for example, perfectly possible for pro-life and pro-choice advocates to collaborate on a software project. They just have to leave their opinions about abortion at the door, and this should not preclude them from freely sharing those opinions on social media without fear of disciplinary reprisal

                                                                                    I wonder if this is true. Richard Stallman said, regarding an abortion joke in glibc:

                                                                                    GNU is not a purely technical project, so the fact that this is not strictly and grimly technical is not a reason to remove this.

                                                                                    I asked this:

                                                                                    must one have the same political views as Stallman to be part of the GNU project? What if we simply believe in the four software freedoms, is that not enough? Should members who are against abortion be excluded?

                                                                                    and he replied to every single post in the thread except mine, so I don’t know what the answer is.

                                                                                    1. 5

                                                                                      I wonder if this is true

                                                                                      It happens all the time in the general workplace. The South has one of the most heated histories you’ll find in things like race and gender issues. We mostly get along, anywhere from tolerance to being friends. My government class was split roughly 50/50 in an abortion debate, with most staying friends within days afterward. Reading people pushing CoCs for political reasons, you’d think that was impossible. Yet we do it every day in places where people’s differences are tolerated. So they’re wrong about that part. That simple.

                                                                                      The author’s concerns about discrimination are my concerns given I’ve watched almost every group in a dominant position down here reward members of their own group and discriminate against others. Non-whites or non-males were no exception. Their acts of racism and sexism just don’t make the news or waves on social media. Those that didn’t do this were rare, truly-inclusive folks that went out of their way to care about and understand people that were different. I love those people even if their beliefs or political moves piss me off at times. Many get along with or love me, too. I’ve learned a lot from them.

                                                                                      The tech industry, esp in Silicon Valley, is just strange to me vs what I normally encounter in general workplace. They seem to think only young, white males are capable of anything while preaching meritocracy and saying/doing anything without consequences. Then, the other haters, err activists, opposing them seem to think all white males are overprivileged people to minimize while giving opportunities and social dominance to every other group. Well, many of them even bring in just select groups (esp white women) ignoring other groups. Plus, carefully controlling speech and action in all forums with assumption every human is too weak to co-exist with those that disagree. With these factions, it’s as if there’s nothing else possible aside from these extremes despite massive number of counterexamples mostly outside of tech but also some in it. That includes the millions of minority members that seem to have a different opinion about minority or diversity issues.

                                                                                      Note: This is a tech site. I’m talking general trends. If you’re an exception, you know who you are. :)

                                                                                      So, I keep talking about it to try to shake people out of this binary, extremist thinking on opposite ends. For now, I don’t know what else to do, given the beliefs are deeply social and emotional. That traditional and social media keep putting them in bubbles, seeing only the people they’ll like the most or the ones who piss them off the most, isn’t helping. One of the best things I ever learned to do is keep people who oppose or aggravate me on social media. I watch their reactions to every hot topic, reading what evidence they post. Very enlightening. Plus, I keep bringing the counterpoints, in as nice a way as I can, to folks on the other side, targeted to their perspective and terms rather than mine. Think on theirs carefully. I don’t know what else to do about the herd or extremist mentalities many are caught up in.

                                                                                      Btw rain1, I don’t know if you were back in time for the last thread on this but it was more interesting than most political ones. I experienced a jaw-dropping surprise or two there.

                                                                                      1. 4

                                                                                        One of the best things I ever learned to do is keep people who oppose or aggravate me on social media.

                                                                                        Be careful with this. Like cultists, an entire generation of pundits have developed that take advantage of psychological weaknesses we all possess. They use the “you have to listen to all sides!” argument to claim a right to your cognition, when doing so opens yourself to manipulation via framing or even simple repetition (and if these have emotional impact, like being aggravating, they’re more effective). Listening to many sides is in general very beneficial, so you have to constantly identify if the person is arguing in bad faith or not. This can be hard to do, and I won’t offer any strategies here because they tend to be extremely personal and subjective.

                                                                                        Critical thinking doesn’t make you immune to this. At the risk of using an engineering analogy, a logically secure input parser is still susceptible to denial of service. So keep your eyes open and try to get input from a variety of sources, but make sure you understand their biases and whether they’re arguing in bad faith or not.

                                                                                        1. 3

                                                                                          “They use the “you have to listen to all sides!” argument to claim a right to your cognition, when doing so opens yourself to manipulation via framing or even simple repetition (and if these have emotional impact, like being aggravating, they’re more effective). “

                                                                                          This is a weak argument. You can always be tricked by any side, especially your own since you trust them more. The result is you still have to listen to different people. Further, you should look at evidence they present more than what you speculate about their biases, bad faith, etc. If evidence looks wrong or especially badly-intentioned, then you might start ignoring that person or group a bit more. You might still glance at their info in case something useful comes out. Totally ignore them when noise ratio is too high. That way, we get to your last sentence without censoring those that disagree with us based on bad assumptions about their motives or whatever. That’s often just ad hominem for political gain disguised as something reasonable.

                                                                                          Looking at the political stuff, the people on the left are often citing sources that are full of shit. The people on the right do that as well. I know each set of mainstream sources are intentionally biased trying to tell their audience what they want to hear to keep their advertising revenue up. For others, it might be book sales, numbers on social media, status/image, and so on. Then, there’s sources that are pretty honest with just human biases. They can get more dishonest if they get emotionally charged, though.

                                                                                          The irony of your warning is that you probably use some of those sources that are definitely operating in bad faith to support your political beliefs. I do, too, but that fits the model I just described of assuming everyone has error or agendas sifting the wheat from the chaff. For instance, I’ve read a Huffington Post article followed by a Ben Shapiro video on a topic since I knew both would have numbers useful to me. Then, I had to check every claim since both are full of shit. The good news is the bullshit itself is often repetitive since they aim for talking points that will spread virally. As in, the claims you have to fact check go down over time until getting good info out of semi-reliable sources is fairly efficient or not as bad at least.

                                                                                          1. 2

                                                                                            For helping to filter out those acting in bad faith, I found it helpful to sometimes ask myself “what if they are right” and then research the topic, pulling up surveys, studies, essays, etc.

                                                                                            That does make you able to recognize these arguments more easily while also increasing your literacy and giving you ammunition.

                                                                                            Generally I think it’s most beneficial if people would ask themselves that more, esp. if they are in one of the political extremes. To try to imagine what the other side thinks and feels. Empathy and understanding are something the world lacks these days.

                                                                                            (Also be careful to not throw political centrists under the bus by simply throwing out “you have to listen to all sides”, we’re usually quite nice people even if we’re not always on your side!)

                                                                                            Of course, I also feel that the most important issue is that we learn to work together more. Plenty of people disagree politically on a number of issues and work together. That can be whether or not Fiber Internet should be subsidized or not up to much more controversial statements. I don’t think such disagreements are a reason not to work together. If they bring that sentiment to work and poison the team effort by splitting the team over it, then of course, stop working with them.

                                                                                            1. 2

                                                                                              be careful to not throw political centrists under the bus by simply throwing out “you have to listen to all sides”

                                                                                              I’m saying listen to most sides, not all sides. Like the record in GEB that destroys the record player itself, our sense of fairness and aversion to hypocrisy can be exploited and destroyed with the right arguments. This happens in the real world, more often over time, and we should recognize it before we’re stuck in endless mental gymnastics trying to break out of political nihilism.

                                                                                        2. 2

                                                                                          Richard Stallman said, regarding an abortion joke in glibc

                                                                                          IIRC he was strongly of the opinion that the joke was not about abortion, but about censorship.

                                                                                          1. 1

                                                                                            I am very positive this is true. Granted, one’s own Weltanschauung truly reflects in one’s coding style in most cases; however, both pro-life and pro-choice advocates can have, e.g., a profound desire for simplicity in their designs, regardless of their opinions.

                                                                                            Things like this, in my opinion, are more rooted in self-discipline and habit, which is more or less not correlated with one’s opinion.

                                                                                          1. 2

                                                                                            I ran a separate /usr partition for a while. It’s an utter pain in the arse and I stopped doing it rather quickly. Tbh I only did it because I thought it would enable me to manage my diskspace better. It was also more complicated rather than easier.

                                                                                            /home, /pictures and / are the only partitions you need.

                                                                                            1. 1

                                                                                              /, /home, /etc, /var, /tmp for me (on zfs).

                                                                                              /tmp, /var as memory disks.

                                                                                              /etc as ro, with possibility for rw separate from /.

                                                                                              / as ro.
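                                                                                                A rough sketch of what that could look like in an fstab, assuming a FreeBSD-style setup (the mention of mdconfig(8) further down suggests that); sizes and dataset names are made up:

                                                                                                # memory-backed /tmp and /var, recreated empty on every boot
                                                                                                md   /tmp   mfs   rw,nosuid,-s512m   0   0
                                                                                                md   /var   mfs   rw,-s512m          0   0

                                                                                                # the read-only / and /etc would be ZFS properties rather than fstab flags, e.g.
                                                                                                # zfs set readonly=on zroot/ROOT/default
                                                                                                # zfs set readonly=off zroot/etc    # temporarily, when /etc needs changes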

                                                                                              1. 3

                                                                                                Why keep /var in memory instead of on-disk? Seems like you want things like webroots and log files to persist between boots.

                                                                                                1. 1

                                                                                                  I usually install webroots under datarootdir. Most of the time I don’t mind purging logfiles at reboot, though memory disks can be backed by a file if wanted/needed, see mdconfig(8). If I want my logs to be persistent, I usually log to a remote machine (with rw /var/log).

                                                                                                  I like my systems to be as immutable as possible. I still remount when updating &c but the default is ro.

                                                                                                2. 2

                                                                                                  I need /var to persist in case I need any logfiles from previous boots, plus it can hang systemd during shutdown because it wants a place to log to. /etc is secured via some etc-git manager that pushes to my git server. Memory disks don’t count as diskspace or partitions to me.

                                                                                                  At least, that’s my opinion.

                                                                                              1. 1

                                                                                                Hell! Did they abolish the rule of law while nobody was looking?

                                                                                                1. 4

                                                                                                  I doubt it. It’s Germany, they do follow the law. Also, they have a history of domestic terrorism.

                                                                                                  1. 5

                                                                                                    In Germany, illegal searches are relatively common. I don’t want to say “all the time”, but regularly. They are later ruled (partially) illegal, the assets returned, and some costs paid.

                                                                                                    Searches may be illegal (and I’m dead sure this will be the ruling with Zwiebelfreunde as well) because the police tend to search more than they are allowed to: entering rooms that are not to be searched, opening cabinets they are not allowed to open, getting permissions that they are not allowed to get. The police are aware of that, but also aware that there are no repercussions for transgressions.

                                                                                                    The problem is that we have no such thing as “fruit of the poisonous tree”. The legal proceedings can still continue except in very crass cases if something “additional” is found.

                                                                                                    Sadly it’s in German, but here’s an interview with a constitutional judge(!) on the subject, stating that many of these searches violate the constitution. http://www.taz.de/!5108848/

                                                                                                    1. 1

                                                                                                      I believe you also have some ongoing conflict between politicians and the federal constitutional court?

                                                                                                      1. 2

                                                                                                        We regularly have, but this is not part of that. The practice I describe here is old.

                                                                                                    2. 2

                                                                                                      Sadly, we had our issues with domestic terrorism in Italy too, but we still feel the shame for the police behaviour in 2001, at Diaz school.

                                                                                                      But you cannot preserve law and security by arresting people for drawings on a whiteboard.

                                                                                                      1. 1

                                                                                                        but we still feel the shame for the police behaviour in 2001, at Diaz school.

                                                                                                        This is the first time I’m hearing of that. Wow. That’s horrible. :(

                                                                                                        1. 3

                                                                                                          I heard about it a year or two ago. Took me a while to calm down that night. They weren’t even sneaking around or trying to justify themselves like the corrupt cops often do over here. Just in-your-face, systematic brutality. That’s the exact kind of shit that we have the 2nd Amendment for. I mean, elite propaganda kept people from using it or even voting right. Still, I can’t think of any other option in a situation like that if you don’t want a pile of screaming, beat-down people in a building.

                                                                                                          1. 5

                                                                                                            Just in-your-face, systematic brutality.

                                                                                                            During the Cold War, in Italy, we had all sorts of these things, in particular in the late ’60s against student protests and political activists.

                                                                                                            The effect was twofold: some people were afraid to express their political opinions if they were not aligned with the government, but it also spread radical extremism that justified armed struggle as a reaction to state violence.

                                                                                                            In reality, violent revolutionaries were actually useful to the US-aligned government to justify repression of the peaceful but effective political culture of the left. So much so that in 2001, members of the police were sent among the demonstrators as “black blocs” who threw Molotov cocktails at civilian buildings in the streets and at the police, to justify the repression.

                                                                                                            This is why in Italy we do not consider arming civilians an option against those in power: trained cops are more effective and better armed anyway, and if protesters are armed and dangerous you can justify any sort of repression.

                                                                                                    3. 4

                                                                                                      Unlikely; from what I gathered, this is a search warrant for witnesses. Additionally, taking all equipment that looks vaguely like computers and CD-ROMs isn’t that unusual; police officers are sadly not well trained in this area, and some of them have trouble operating computers (a fellow student in my CS courses took part in a “computer course” for the police which largely consisted of the bare minimum of Excel and Word usage). It’s not the first time something like this has happened (there are various accounts of it in the past, for example a CCC member having their home equipment taken even though the warrant said “take the server the stuff happened on” and the server was in another datacenter).

                                                                                                      The requirements for being a police officer in Germany don’t intersect well with having basic knowledge of computers.

                                                                                                    1. 2

                                                                                                      Oh, wow that’s pretty neat. I might install an instance somewhere on my server…

                                                                                                      Super awesome work!

                                                                                                      1. 3

                                                                                                        Thanks! In case it’s useful, here’s the full /etc/nixos/configuration.nix for the AWS instance.

                                                                                                      1. 12

                                                                                                        As someone who uses arch on all my developer machines, arch is a horrible developer OS, and I only use it because I know it better than other distros.

                                                                                                        It was good 5-10 years ago (or I was just less sensitive back then), but now pacman -Syu is almost guaranteed to break or change something for the worse, so I never update, which means I can never install any new software because everything is dynamically linked against the newest library versions. And since the Arch way is to be bleeding edge all the time, asking things like “is there an easy way to roll back an update because it broke a bunch of stuff and brought no improvements” gets you laughed out the door.

                                                                                                        I’m actually finding myself using windows more now, because I can easily update individual pieces of software without risking anything else breaking.

                                                                                                        @Nix people: does NixOS solve this? I believe it does but I haven’t had a good look at it yet.

                                                                                                        1. 14

                                                                                                          Yes, Nix solves the “rollback” problem, and it does it for your entire OS not just packages installed (config files and all).

                                                                                                          With Nix you can also have different versions of tools installed at the same time, without the standard python3.6/python2.7 binary-name dance most places do: just drop into a new nix-shell and install the one you want, and in that shell that’s what you have. There is so much more. I use FreeBSD now because I just like it more overall, but I really miss Nix.

                                                                                                          EDIT: Note, FreeBSD solves the rollback problem as well, just differently. In FreeBSD if you’re using ZFS, just create a boot environment before the upgrade and if the upgrade fails, rollback to the pre-upgrade boot environment.
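                                                                                                          To make both of those concrete, a rough sketch (the commands are the standard ones for each system, but treat this as illustrative rather than a recipe):

                                                                                                          # NixOS: every rebuild creates a new "generation" you can switch back to
                                                                                                          $ sudo nixos-rebuild switch              # build and activate a new generation
                                                                                                          $ sudo nixos-rebuild switch --rollback   # go back to the previous one
                                                                                                          # per-shell tool versions (attribute names vary by nixpkgs channel)
                                                                                                          $ nix-shell -p python27 --run 'python --version'

                                                                                                          # FreeBSD + ZFS: snapshot the root into a boot environment before upgrading
                                                                                                          $ bectl create pre-upgrade        # or: beadm create pre-upgrade on older setups
                                                                                                          # ...run the upgrade...
                                                                                                          $ bectl activate pre-upgrade      # if it broke, boot back into the old environment
                                                                                                          $ shutdown -r now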

                                                                                                          1. 9

                                                                                                            Being a biased Arch Developer, I rarely have Arch break when updating. Sometimes I have to recompile our own C++ stack due to soname bumps but for the rest it’s stable for me.

                                                                                                            For Arch there is indeed no rollback mechanism, although we do provide an archive repository with old versions of packages. Another option would be BTRFS/ZFS snapshots. I believe the general Arch opinion is instead of rolling back fixing the actual issue at hand is more important.

                                                                                                            1. 8

                                                                                                              I believe the general Arch opinion is instead of rolling back fixing the actual issue at hand is more important.

                                                                                                              I can see some people might value that perspective. For me, I like the ability to plan when I will solve a problem. For example I upgraded to the latest CURRENT in FreeBSD the other day and it broke. But I was about to start my work day so I just rolled back and I’ll figure out when I have time to address it. As all things, depends on one’s personality what they prefer to do.

                                                                                                              1. 2

                                                                                                                For me, I like the ability to plan when I will solve a problem.

                                                                                                                But on stable distros you don’t even have that choice. Ubuntu 16.04 (and 18.04 as well, I believe) ships an ncurses version that only supports up to 3 mouse buttons, for ABI stability or something. So now, if I want the scroll wheel to work, I have to rebuild everything myself and maintain some makeshift local software repository.

                                                                                                                And that’s not an isolated case, from a quick glance at my $dayjob workstation, I’ve had to build locally the following: cquery, gdb, ncurses, kakoune, ninja, git, clang and other various utilities. Just because the packaged versions are ancient and missing useful features.

                                                                                                                On the other hand, I’ve never had to do any of this on my arch box because the packaged software is much closer to upstream. And if an update break things, I can also roll back from that update until I have time to fix things.

                                                                                                                1. 2

                                                                                                                  I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.

                                                                                                                  And if an update break things, I can also roll back from that update until I have time to fix things.

                                                                                                                  Several people here said that Arch doesn’t really support rollback which is what I was responding to. If it supports rollback, great. That means you can choose when to solve a problem.

                                                                                                                  1. 1

                                                                                                                    I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.

                                                                                                                    Ok, but that’s a problem inherent to stable distros, and it gets worse the more stable they are.

                                                                                                                    Several people here said that Arch doesn’t really support rollback

                                                                                                                    It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.

                                                                                                                    1. 1

                                                                                                                      It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.

                                                                                                                      Your description makes it sound like pacman doesn’t support roll backs, but you can get that behaviour if you have to and are clever enough. Those seem like very different things to me.

                                                                                                                      Also, what you said about stable distros doesn’t seem to match my experience with FreeBSD. FreeBSD is ‘stable’, yet ports packages tend to be fairly up to date (or at least I rarely run into outdated ones, except for a few).

                                                                                                                      1. 1

                                                                                                                        I’m almost certain any kind of “rollback” functionality in pacman is going to be less powerful than what’s in Nix, but it is very simple to rollback packages. An example transcript:

                                                                                                                        $ sudo pacman -Syu
                                                                                                                        ... some time passes, after a reboot perhaps, and PostgreSQL doesn't start
                                                                                                                        ... oops, I didn't notice that PostgreSQL got a major version bump, I don't want to deal with that right now.
                                                                                                                        $ ls /var/cache/pacman/pkg | rg postgres
                                                                                                                        ... ah, postgresql-x.(y-1) is sitting right there
                                                                        $ sudo pacman -U /var/cache/pacman/pkg/postgresql-x.(y-1)-x86_64.pkg.tar.xz
                                                                        $ sudo systemctl start postgresql
                                                                                                                        ... it's alive!
                                                                                                                        

                                                                                                                        This is all super standard, and it’s something you learn pretty quickly, and it’s documented in the wiki: https://wiki.archlinux.org/index.php/Downgrading_packages

                                                                                                                        My guess is that this is “just downgrading packages” where as “rollback” probably implies something more powerful. e.g., “rollback my system to exactly how it was before I ran the last pacman -Syu.” AFAIK, pacman does not support that, and it would be pretty tedious to actually do it if one wanted to, but it seems scriptable in limited circumstances. I’ve never wanted/needed to do that though.

                                                                                                                        (Take my claims with a grain of salt. I am a mere pacman user, not an expert.)

                                                                                                                        EDIT: Hah. That wiki page describes exactly how to do rollbacks based on date. Doesn’t seem too bad to me at all, but I didn’t know about it: https://wiki.archlinux.org/index.php/Arch_Linux_Archive#How_to_restore_all_packages_to_a_specific_date
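                                                                        For reference, the date-based rollback on that page boils down to pointing pacman at the Arch Linux Archive snapshot for a given day and letting it downgrade everything (URL format as documented there; the date is just an example):

                                                                        # /etc/pacman.d/mirrorlist – comment out the normal mirrors and add:
                                                                        Server = https://archive.archlinux.org/repos/2018/07/01/$repo/os/$arch

                                                                        $ sudo pacman -Syyuu    # the second -u allows downgrades to the archived versions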

                                                                                                          2. 12

                                                                                                            now pacman -Syu is almost guaranteed to break or change something for the worse

                                                                                                            I have the opposite experience. Arch user since 2006, and updates were a bit more tricky back then, they broke stuff from time to time. Now nothing ever breaks (I run Arch on three different desktop machines and two servers, plus a bunch of VMs).

                                                                                                            I like the idea of NixOS and I have used Nix for specific software, but I have never made the jump because, well, Arch works. Also with Linux, package management has never been the worst problem, hardware support is, and the Arch guys have become pretty good at it.

                                                                                                            1. 3

                                                                                                              I have the opposite experience

                                                                                                              I wonder if the difference in experience is some behaviour you’ve picked up that others haven’t. For example, I’ve found that friend’s children end up breaking things in ways that I would never do just because I know enough about computers to never even try it.

                                                                                                              1. 2

                                                                                                                I think it’s a matter of running pacman -Syu often (every few days or even daily) instead of once per month. Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.

                                                                                                                I’ve been an Arch user for 6 years, and there were maybe 3 times during those years when something broke badly (I was unable to boot). Once it was my fault; the second and third were related to an nvidia driver and Xorg incompatibility.

                                                                                                                1. 3

                                                                                                                  Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.

                                                                                                                  It’s sometimes also a matter of bad timing. Now every time before doing a pacman -Syu I check /r/archlinux and the forums to see if someone is complaining. If so then I tend to wait for a day or two before the devs push out updates to broken packages.

                                                                                                                2. 1

                                                                                                                  That’s entirely possible.

                                                                                                              2. 4

                                                                                                                I have quite a contrary experience: I have pacman run automatically in the background every 60 minutes, and all the breakage I suffer is from human-induced configuration errors (such as a misconfigured boot loader or fstab).
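                                                                                                                For the curious, one way to wire that up is a systemd timer along these lines (a sketch with made-up unit names, not necessarily the setup described above; unattended --noconfirm upgrades are only sensible if you also keep an eye on the Arch news):

                                                                                                                # /etc/systemd/system/pacman-upgrade.service
                                                                                                                [Unit]
                                                                                                                Description=Unattended pacman upgrade

                                                                                                                [Service]
                                                                                                                Type=oneshot
                                                                                                                ExecStart=/usr/bin/pacman -Syu --noconfirm

                                                                                                                # /etc/systemd/system/pacman-upgrade.timer
                                                                                                                [Unit]
                                                                                                                Description=Hourly pacman upgrade

                                                                                                                [Timer]
                                                                                                                OnCalendar=hourly
                                                                                                                Persistent=true

                                                                                                                [Install]
                                                                                                                WantedBy=timers.target

                                                                                                                $ sudo systemctl enable --now pacman-upgrade.timer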

                                                                                                                1. 1

                                                                                                                  Things like Nix even allow rolling back from almost all user configuration errors.

                                                                                                                  1. 3

                                                                                                                    Would be nice, yeah, though I never understood or got Nix really. It’s a bit complicated and daunting to get started and I found the documentation to be lacking.

                                                                                                                2. 3

                                                                                                                  How often were you updating? Arch tends to work best when it’s updated often. I update daily and can’t remember the last time I had something break. If you’re using Windows, and coming back to Arch very occasionally and trying to do a huge update you may run into conflicts, but that’s just because Arch is meant to be kept rolling along.

                                                                                                                  I find Arch to be a fantastic developer system. It lets me have access to all the tools I need, and allows me to keep up the latest technology. It also has the bonus of helping me understand what my system is doing, since I have configured everything.

                                                                                                                  As for rollbacks, I use ZFS boot environments. I create one prior to every significant change such as a kernel upgrade, and that way if something did happen go wrong, and it wasn’t convenient to fix the problem right away, I know that I can always move back into the last environment and everything will be working.

                                                                                                                  1. 2

                                                                                                                    How do you configure ZFS boot environments with Arch? Or do you just mean snapshots?

                                                                                                                    1. 3

                                                                                                                      I wrote a boot environment manager zedenv. It functions similarly to beadm. You can install it from the AUR as zedenv or zedenv-git.

                                                                                                                      It integrates with a bootloader if it has a “plugin” to create boot entries, and keep multiple kernels at the same time. Right now there’s a plugin for systemdboot, and one is in the works for grub, it just needs some testing.
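                                                                                                                      Rough usage sketch, assuming the beadm-style subcommands zedenv mirrors (check zedenv --help for the authoritative list):

                                                                                                                      $ zedenv create pre-kernel-upgrade     # clone the current boot environment
                                                                                                                      $ zedenv list                          # show available environments
                                                                                                                      $ zedenv activate pre-kernel-upgrade   # boot into it next time if the upgrade went wrong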

                                                                                                                      1. 2

                                                                                                                        Looks really useful. Might contribute a plugin for rEFInd at some point :-)

                                                                                                                        1. 1

                                                                                                                          Awesome! If you do, let me know if you need any help getting started, or if you have any feedback.

                                                                                                                          It can be used as is with any bootloader, it just means you’ll have to write the boot config by hand.

                                                                                                                1. 6

                                                                                                                  What are the key differences between ActivityPub and RSS+WebSub/PubSubHubbub? Its spec is long and verbose; I can’t tell at a glance what it represents. I see that it supports likes and subscription lists, but what are the other differences from RSS? Does it support comments? Is it just for Twitter-like websites, or suitable for blogs and Reddit-like websites too?

                                                                                                                  What I like in ActivityPub is that it’s RDF-based. It’s cool technology based on romantic ideas of expert systems, Prolog, rule-based AI, etc.

                                                                                                                  1. 7

                                                                                                                    Besides mastodon and pleroma which are twitter clones using ActivityPub, there’s also peertube for videos, PixelFed for images and Plume for blogging. Those projects are all pretty new though, so it’s too early to say whether ActivityPub works well for this kind of stuff, but it looks promising imo

                                                                                                                    1. 1

                                                                                                                      RSS+Webmention can be used to realize a very rudimentary version of federation (I’m currently developing a platform and testing simple federation using RSS+WM).

                                                                                                                      However, ActivityPub allows much more versatility and provides the endpoint with a low-overhead, machine-readable representation of actions (AP is not like RSS, where everything operates as feeds; rather, it’s actors performing actions on other actors or objects).
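                                                                                                                      To make the “actors performing actions” part concrete, a rough sketch of a delivery; all URLs are made up, and a real server would additionally require the POST to carry an HTTP Signature:

                                                                                                                      $ curl -X POST https://remote.example/users/bob/inbox \
                                                                                                                          -H 'Content-Type: application/activity+json' \
                                                                                                                          -d '{
                                                                                                                                "@context": "https://www.w3.org/ns/activitystreams",
                                                                                                                                "type": "Create",
                                                                                                                                "actor": "https://my.example/users/alice",
                                                                                                                                "to": ["https://remote.example/users/bob"],
                                                                                                                                "object": {
                                                                                                                                  "type": "Note",
                                                                                                                                  "attributedTo": "https://my.example/users/alice",
                                                                                                                                  "content": "Hello over ActivityPub"
                                                                                                                                }
                                                                                                                              }'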

                                                                                                                    1. 1

                                                                                                                       I think we should stop doing <Adjective> Code. Just write code. As long as the code works to the specification, all solutions are equally valid and “good/clean/compassionate”. I don’t like it when people preach TDD or unit tests or mocking or object orientation or functional programming or formal verification or anything else at me.

                                                                                                                      I’ll write using TDD when I feel it necessary. Same for all the other previously mentioned methods. I’m an engineer and I will not use one tool, like any of the above, for all my problems. I use a hammer for the nails, a screwdriver for the screws and a CNC for the metal parts.

                                                                                                                      The right tools for the right job.

                                                                                                                      1. 2

                                                                                                                        The problem with “right tool for the job” is that as a platitude it contains no information so can be used to defend any position.

                                                                                                                        You want to avoid learning anything new? Stick with 1970s tech, it’s the right tool for the job.

                                                                                                                        You want to switch our core systems to that thing you read on hackernoon an hour ago? Go for it, it’s the right tool for the job.

                                                                                                                        You think you should TDD all the time? Well it’s the right tool for the job, so go ahead.

                                                                                                                        You think you should avoid TDD and get on with coding? Sure, that’s the right tool for the job.

                                                                                                                        1. 1

                                                                                                                          That’s quite intentional.

                                                                                                                          There is no one true right tool for the job because the right tool for the job in programming is partially subjective. Additionally learning something new isn’t coding. It’s learning. It also does not exclude simply experimenting with new tech either.