1. 1

    I’d be interested to see how git fits into this workflow. If the code lives on a remote VPS, what’s the most efficient process for committing your changes and pushing a new branch?

    1. 2
      1. Install git on the remote machine
      2. Set up your git config on the remote machine
      3. Perform git commands on the remote machine while you work in VS Code

      When you’re using VS Code’s remote extension, any terminals you open are ssh sessions on the remote machine, so step 3 can still be done without leaving your local VS Code window.
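
      A minimal sketch of those steps as they might look in a VS Code remote terminal (the identity values and branch name are placeholders, and apt stands in for whatever package manager the VPS uses):

      ```
      # Steps 1 and 2, once per machine:
      sudo apt install git
      git config --global user.name "Your Name"
      git config --global user.email "you@example.com"

      # Step 3, as you work (this terminal is already an SSH session on the VPS):
      git checkout -b my-feature
      git commit -am "describe the change"
      git push -u origin my-feature
      ```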

      1. 1

        You forgot step 2.5:

        Compromise your keys or other authentication data by copying them to an unencrypted remote VM.

        Now your hosting provider can push to the branches that autodeploy to production.

        1. 3

          Alternatively:

          1. Don’t put any private keys on the VM
          2. Use the VM as a git remote
          3. When you’re ready to push to branches, pull the code from the remote onto your better-protected machine and push from there (a rough sketch follows this list)…
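
          A sketch of that flow, assuming the VM appears as “vps” in your SSH config and the repository lives at ~/project there (those names, and the branch name, are hypothetical):

          ```
          # On the better-protected local machine:
          git remote add vps vps:project        # the VM acts as an ordinary git remote over SSH
          git fetch vps
          git checkout -b feature vps/feature   # grab the work done on the VM
          git push origin feature               # keys never leave this machine
          ```
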
          1. 1

            I use a hardware key for authentication that is connected to my local machine. Thanks to ssh -A I can use that key on my remote machines. It is amazing ;)
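
            For anyone unfamiliar, agent forwarding looks roughly like this (host and branch names are placeholders); the key itself never leaves the local machine, the remote host only talks to the forwarded agent:

            ```
            ssh -A user@vps        # forward the local ssh-agent to the VPS
            # Then, on the VPS, git authenticates through the forwarded agent:
            git push origin main
            ```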

          2. 1

            Yes, that’s exactly how I do it.

        1. 1

          Mine :) monopati

          I just wanted a very simple static content generator.

          1. 33

            Our main mistake was that we did not live up to our own core value of collaboration by including our users, contributors, and customers in the strategy discussion

            No, your main mistake was to decide to impose surveillance on your users, not that you forgot to ask them for permission to track them.

            I’m rather cynical about the whole thing. I think GitLab is going to try again later, just in a sneakier way.

            1. 2

              So if they had asked, and an overwhelming majority had said ‘yes, please do, and use the data to improve my experience’, then it would still be wrong?

              1. 11

                It would be, yes, because you can phrase the question in a way that makes most people say yes while you still do the exact same thing. Manipulating people into accepting surveillance is also wrong, and laws like the GDPR at least try to prevent it in some way.

                1. 2

                  “manufactured consent” being the literal industry term.

                2. 4

                  Also consider the opposite scenario. What if the majority had replied “yes, that’s fine”? Would it be OK for them to impose surveillance even on the minority that doesn’t want to be tracked?

                  1. 4

                    They could collect it in a way that could only be used to improve the user experience. They might anonymize what they can, put a time limit on retention, host it on secure servers that receive it through a one-way link, have an NDA in place, etc. They can make it easy to disable or, even better, opt-in. There’s a lot that can be done to collect data on users while mitigating the non-government risks that data brings.

                    The companies doing telemetry virtually never do anything like that, though. That means that, at best, they’re apathetic to any damage the data will cause their users (i.e. an externality). At worst, they will sooner or later sell their users out to 3rd parties that specialize in targeting them in ways they don’t want, sometimes maliciously but mostly just annoyingly. Any venture-backed company has a financial incentive to squeeze every bit of value it can out of its users. Surveillance should always be considered evil in such scenarios, given it will inevitably lead to the consequences I described.

                    If they do it, they should have safeguards like I described. Heck, it would even make them look more trustworthy as a brand, assuming users tolerated the telemetry to begin with. The situation might have gone totally the other way if GitLab had done this openly, with legal and technical protections like I described. They should still make it optional for those who want zero tracking, though.

                    1. 4

                      The only way to safeguard users’ privacy and anonymity is to not collect the information in the first place. We have seen all sorts of databases abused, even those that people made the effort to anonymise in one way or another.

                      1. 1

                        I agree!

                    2. 2

                      Sure, yeah. Ethical behaviour and mass approval of behaviour are different things.

                  1. 7

                    As it stands I trust this significantly less than Cloudflare and Google. There is no privacy policy on the website and there is no mention of any legal entity behind the service so, if you use it, you

                    1. have no idea what they do with your data (and remember, persistent SSL cookies mean that if you use DoH, you can be tracked across networks!)
                    2. have no idea who to hold responsible for infringements on your privacy.
                    1. 3

                      We have added a privacy section to the DNS webpage, because indeed it could be difficult for people to extract that information from our generic terms page.

                      1. 3

                        I wouldn’t trust Google with my data no matter what privacy policy they claim. In the end those policies don’t mean much. Better to use common sense and instinct. These guys seem to me to be small-scale, in the “fed up with big corp” department. At least for now ;)

                        1. 2

                          There is no privacy policy on the website

                          I was also looking and found this.

                          1. 2

                            Yes, I saw that as well, but discounted it as it’s on a separate website and

                            • there is no mention of when those terms were last updated and
                            • there is no mention of who controls these services so there is no one to hold liable for any violation of those terms.

                            EDIT: I’m sure the (AFAICT 3) people running this service are well-intentioned, but my point is that, unless you personally know these people, you can trust them about as much as you can trust 3 randos coming up to you in public to offer you free candy.

                        1. 4

                          Who are the folks running this?

                          1. 11

                            We are a group of Open Source hackers. We value privacy, and we set up services that we find useful. Partly to scratch our own itch, but also because we feel that we should do our part and offer alternatives to corporate centralization.

                            1. 5

                              https://libreops.cc/about.html ?

                              Looks like the group is on libreho.st

                            1. 22

                              I don’t see why I should trust my ISP more than Cloudflare. My ISP is using DNS domain blocking for country-wide censorship.

                              My concern with Cloudflare is centralization. But fortunately, DoH has started gaining some traction and more providers start to offer it out there.

                              1. 13

                                If you’re American, there’s probably no reason to trust your ISP more than Cloudflare. However, most people are not American. For us, it’s kind of shitty if browsers start sending every query to an American megacorp which follows American privacy laws and is in a country where we have no legal power.

                                I can expect my ISP to follow Norwegian privacy laws, which I’d bet are way better than American. If my ISP does something illegal according to our privacy laws, I have way more power to stop them than if an American company does the same.

                                I know this will all be user configurable, but if it gets enabled by default, and it defaults to Cloudflare, most people won’t change that. Most people won’t know DoH is a thing, much less the nuance regarding why they may or may not want to change the setting.

                                1. 7

                                  Is Cloudflare going to be the default for the rest of the world? Mozilla is only rolling it out to the US, as this article even mentions. I haven’t seen any announcement about what the plan is for the rest of the world, if there is one.

                                  1. 1

                                    If Mozilla is only rolling it out to the US, and never ends up rolling it out to the rest of the world, then yeah, my points don’t matter. However, I haven’t heard any statement that they won’t ever make DoH with Cloudflare the default for the rest of the world, just that they’re not doing it yet.

                                    1. 2

                                      They have talked about having different regional defaults, and including more preset options in the dropdown for configuring DoH. This hasn’t happened yet, though.

                                  2. 10

                                    I’m not an American either. In my country (Greece) authorities can order ISPs to block certain domains on a DNS level, without any due process. And ISPs comply. DoH is the most user-friendly way for many people to access these websites.
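
                                    As a concrete illustration, a DoH lookup is just an ordinary HTTPS request, so the ISP’s resolver never sees it (Cloudflare’s JSON endpoint is shown here; any DoH provider works the same way):

                                    ```
                                    # Resolve a name over HTTPS instead of the ISP's (filtered) DNS:
                                    curl -s -H 'accept: application/dns-json' \
                                      'https://cloudflare-dns.com/dns-query?name=example.com&type=A'
                                    ```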

                                    1. 2

                                      If Cloudflare operates in that country, they would still have to comply with local laws, no? Just like all other services have to.

                                      1. 1

                                        Nope. Cloudflare is not considered an ISP in that country, so there is no need to comply. The same applies to any other public DNS service (e.g. Google).

                                      2. 1

                                        If the blocking is only at the DNS level, it’s not much of a blocking method. It would have to redirect traffic based on certain IP addresses as well, which kind of defeats the purpose of the whole DoH endeavor.

                                        1. 2

                                          It’s a silly blocking method indeed, but it’s effective for the majority of users, who don’t know how to switch the DNS settings on their systems. IP blocking is also ineffective, because IPs often change ownership.

                                  1. 8

                                    The book is also distributed on the Anarchist Library.

                                    1. 2

                                      So the excerpt is from the middle of chapter 6, not 7.

                                      1. 0

                                        Is that copyright infringement?

                                        1. 2

                                          Something tells me that Anarchists would neither know nor care.

                                          1. 0

                                            Perhaps, I guess the bit I didn’t ask was “is it okay to link to potentially copyright-infringing content in a Lobsters comment?”. I couldn’t find anywhere that the author (or the publisher) have said that this kind of distribution is okay, but that doesn’t mean they didn’t.

                                            1. 1
                                              1. 1

                                                Worst case, we get a take-down notice followed by a Lobsters policy on that stuff. If we err too much on the side of caution, then Sci-Hub is off limits here as well. That’s why I never brought any of it up here.

                                                1. 1

                                                  I suspect that it’s probably against the rules unless it is somehow legal.

                                          1. 3

                                            I’m @comzeradd@libretooth.gr, and I’m also an admin there. It’s a fresh Mastodon instance we set up recently.

                                              1. 5

                                                Lesson learned: if SafeHouse were Open Source, none of this would have happened.

                                                1. 9

                                                  Considering this is an awesome story to tell at gatherings and he got a free coffee mug, I don’t know that it’s a good argument in favor of Open Source. ;)

                                                  1. 2

                                                    yeah, just kidding :)

                                                1. 2

                                                  I’m curious about choosing AWS. Don’t they charge for traffic?

                                                  1. 3

                                                    In the article he mentions that it’s just to browse some gov/bank websites and avoid triggering IP warnings (which doesn’t always work, as cloud provider IPs are also flagged due to scraping). Traffic is only expensive when watching BBC/Netflix.

                                                    1. 3

                                                      Yes, AWS charges for traffic; that is actually covered in the “So… What Does It Cost?” section. Is the post tl;dr?

                                                      1. 1

                                                        We do, but for a low-usage VPN it’s trivial to the point of fading into the white noise. I ran an Algo VPN in EC2 and used it extensively for all my personal work for a few months, and the costs incurred were negligible. (I don’t speak for my employer, yada, yada.)

                                                      1. 7

                                                        I am not surprised at all that XMPP has way more true checks than the other options.

                                                        1. 1

                                                          Except for the most important thing: “E2E by default”.

                                                          1. 3

                                                            Conversations.im, arguably the most popular XMPP client, has E2E by default.

                                                            1. 1

                                                              True. Probably the best mobile XMPP client. But not all clients have gotten on board the OMEMO ship yet.

                                                              1. 1

                                                                Yes, you probably know it, but for a wider audience: one can track OMEMO deployment progress on https://omemo.top

                                                                On desktop I’d recommend Gajim; it still looks a little bit dated, but it supports all the modern XEPs and is actively maintained.

                                                          1. 2

                                                            The most important feature, regarding security, is “E2E by default”. Encryption exists either by default, or it doesn’t exist at all. And this is currently the biggest failure of XMPP.

                                                            1. 1

                                                              Any document on this subject that fails to mention the Lenovo ThinkPad X2?? series, which is epically robust and maintainable and available with reasonably powerful specs, didn’t do its research well. (ps hey @comzeradd)

                                                              1. 1

                                                                Current ones are also less serviceable than the early models (X200, X201, X220, X230 – I had all of these), in favor of a thinner design and a “modern look” which no one asked for.

                                                                1. 1

                                                                  The report is based on “best-selling” products, so I guess the X2?? series didn’t make the cut, not being as popular as we may think it is. It also refers to new models, so the X280 may not be as repairable as the legendary X220.

                                                                1. 1

                                                                  https is slower than http by about 30%

                                                                  That’s not accurate. Most modern web server software uses HTTP/2 for HTTPS, which is faster than plain HTTP.
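
                                                                  This is easy to check for any given server; curl reports the protocol version it negotiated (example.com stands in for whatever host you care about):

                                                                  ```
                                                                  # Prints the negotiated HTTP version, e.g. "2":
                                                                  curl -sI -o /dev/null -w '%{http_version}\n' https://example.com
                                                                  ```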

                                                                  1. 4

                                                                    HTTP/2 allows you to interleave and pre-send resources, which can make web pages faster, but package managers don’t benefit from any of that. It would probably be slower.

                                                                    1. 1

                                                                      Keep in mind the context. This is specific to OpenBSD’s ftp(1) tool.

                                                                    1. 2

                                                                      It would be nice if keyservers with a more “modern” approach had wider adoption, or if mail providers started implementing WKD.
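
                                                                      For context, WKD (Web Key Directory) lets GnuPG fetch a key over HTTPS from the mail domain itself, no keyserver involved; with a modern gpg the lookup is just (the address is hypothetical):

                                                                      ```
                                                                      # Fetches the key from the provider's well-known WKD URL:
                                                                      gpg --auto-key-locate clear,wkd --locate-keys alice@example.com
                                                                      ```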

                                                                      1. 1

                                                                        Yeah, modernized keyservers would make a massive difference, and would be greatly more appealing to people in general.

                                                                      1. 2

                                                                        This is a nice post that debunks some of the myths and FUD against systemd: http://0pointer.de/blog/projects/the-biggest-myths.html

                                                                          1. 3

                                                                            I tried Duplicity with GPG but sadly found it lacking, even for rarely-accessed archives. I eventually moved to restic, and it works splendidly.

                                                                            1. 3

                                                                              I also do backups using restic against cloud storage (in my case a Ceph cluster), which has two advantages (a rough sketch follows the list):

                                                                              1. backups are stored redundantly
                                                                              2. restic backups against an HTTP endpoint are much faster than over SSH
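
                                                                              A minimal sketch of that setup, assuming a REST endpoint such as restic’s rest-server at a hypothetical host:

                                                                              ```
                                                                              # One-time: initialize the repository behind the HTTP endpoint.
                                                                              restic -r rest:https://backup.example.com:8000/myrepo init
                                                                              # Recurring: back up over HTTP rather than SSH.
                                                                              restic -r rest:https://backup.example.com:8000/myrepo backup ~/documents
                                                                              ```
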
                                                                              1. 2

                                                                                My biggest complaints about restic are the lack of access controls and the slow pruning of data. Perhaps those will be fixed one day.

                                                                                1. 2

                                                                                  What were you missing from duplicity?

                                                                                  1. 5

                                                                                    Not the OP, but the fact that you can’t delete intermediate incremental backups is pretty bad… Pruning is a pretty key aspect of most backup strategies (I want dailies going back N days, weeklies going back N weeks, monthlies going back N months, etc; see the sketch below). Also, duplicity would run out of memory for me (but restic would too – I eventually settled on the free-to-use-but-not-free-software duplicacy, as I wrote about at https://dbp.io/essays/2018-01-01-home-backups.html – some more details about the OOM stuff are in the Lobsters thread https://lobste.rs/s/pee9vl/cheap_home_backups )
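
                                                                                    For contrast, that kind of retention policy is a one-liner in restic (the repository URL is made up); duplicity’s append-only incremental chains are exactly what make the equivalent impossible there:

                                                                                    ```
                                                                                    # Keep 7 daily, 4 weekly, and 12 monthly snapshots; prune the rest.
                                                                                    restic -r rest:https://backup.example.com:8000/myrepo forget \
                                                                                      --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
                                                                                    ```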

                                                                                    1. 3

                                                                                      For one, being able to restore single files without scanning through the archive. The duplicity guys do know about the problems with using tar, but I don’t know when they’ll be able to move away from it.

                                                                                      1. 3

                                                                                        Are you sure this is not possible using --file-to-restore?
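
                                                                                        For reference, the invocation looks roughly like this (the URL and paths are made up); the open question is how duplicity finds the file internally, not whether the flag exists:

                                                                                        ```
                                                                                        # Restore a single file from the most recent backup chain:
                                                                                        duplicity restore --file-to-restore home/user/notes.txt \
                                                                                          sftp://backup@host.example.com/dup /tmp/notes.txt
                                                                                        ```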

                                                                                        1. 2

                                                                                          I’m not 100% sure, I’m just going by my limited knowledge of the tar format and what my link says:

                                                                                          Not seek()able inside container: Because tar does not support encryption/compression on the inside of archives, tarballs nowadays are usually post-processed with gzip or similar. However, once compressed, the tar archive becomes opaque, and it is impossible to seek around inside. Thus if you want to extract the last file in a huge .tar.gz or .tar.gpg archive, it is necessary to read through all the prior data in the archive and discard it!

                                                                                          My guess is that --file-to-restore has to search for the file in the .tar.gz. If you find otherwise, I’d be interested to know!