Threads for Mirabellette

    1. 1

      It has been about a year now. Is this guide still valid?

      1. 1

        It is; very few things have changed, and I have updated it.

    2. 1

      Why use easyrsa over, for example, GnuPG, which is probably already installed?

      1. 1

        I use easyrsa because it is the tool recommended by OpenVPN and developed by the same project. It is also really easy to use, even if it does not provide the best algorithms available.

    3. 2

      According to the OpenVPN manual page, the --auth option is not used when you use --tls-crypt with AES-256-GCM. What I did was set its value to none.
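
      For reference, the relevant server-side lines would look something like this (a minimal sketch; the key path and the rest of the configuration are assumed):

          # AES-256-GCM is an AEAD cipher, so the HMAC selected by --auth
          # is not used for the data channel; disable it explicitly
          cipher AES-256-GCM
          auth none
          # tls-crypt both authenticates and encrypts the control channel
          tls-crypt /etc/openvpn/ta.key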

      1. 1

        That is perfectly true; thank you for your comment. I have edited the configuration.

    4. 2

      The CRL procedure should also be documented. I came across a team who assumed that deleting the certificate from the “keys” directory was enough to lock out users. Additionally, an explanation of the meaning of the flags found in “index.txt” would be nice :-)
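
      For reference, with easy-rsa 3 the revocation flow is roughly the following (a sketch; file locations depend on the installation):

          # revoke the client's certificate and regenerate the CRL
          ./easyrsa revoke client1
          ./easyrsa gen-crl
          # publish pki/crl.pem where the server can read it, then add
          #   crl-verify /etc/openvpn/crl.pem
          # to the server configuration; without this, revocation has no effect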

      Addendum: another part that I find missing and misunderstood by admin teams is the “proper” way of certificate creation. Users should issue a CSR and have it signed. In most real-world setups I have come across, the admin just issues everything and sends the OVPN file. A very common anti-pattern.
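
      That CSR-based flow would look roughly like this with easy-rsa 3 (a sketch; the name “alice” and the paths are illustrative):

          # on the user's machine: generate a key pair and a signing request
          ./easyrsa gen-req alice
          # alice sends alice.req to the admin; her private key never leaves her machine

          # on the CA machine: import and sign the request
          ./easyrsa import-req /path/to/alice.req alice
          ./easyrsa sign-req client alice
          # the issued alice.crt is returned to the user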

      I have wanted to write an article about the entire lifecycle of OpenVPN and user management. Yours comes close, though.

      Nice job nevertheless!

      1. 2

        It is now documented, thank you.

        1. 1

          bookmarked, thanks for sharing! :)

      2. 2

        I totally agree, I will update it when I have some time. Thank you for your comment :)

    5. 7

      I’ve used OpenVPN for years, but the whole setup and maintenance process looks outdated. I recently started using WireGuard[1] in production, which is quick to set up and hardly requires any maintenance. It also works well in a containerized world.

      [1] - https://www.wireguard.com
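
      To illustrate the “quick to set up” part: a complete server-side WireGuard configuration can be as small as this (a sketch for wg-quick; the keys, addresses and port are placeholders):

          # /etc/wireguard/wg0.conf
          [Interface]
          PrivateKey = <server-private-key>
          Address = 10.0.0.1/24
          ListenPort = 51820

          [Peer]
          PublicKey = <client-public-key>
          AllowedIPs = 10.0.0.2/32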

      1. 3

        Unfortunately there’s no kernel support for WireGuard yet, so maintenance is higher than it would otherwise be (e.g. you depend on the WireGuard maintainer keeping the out-of-tree module up to date for the latest kernels, and you (or your distro) have to build it out of tree).

        It seems like he’s close to getting it merged though, so I’m holding out for that!

        1. 4

          wireguard-go might also be usable if maximum performance is not required. I wouldn’t think it would be any slower than OpenVPN, which is also user-space.

        2. 1

          Most distributions have provided a kernel module for years. The burden of maintaining such a tiny piece of code is moderate, and it does not impact admins or end users.

          1. 1

            Maybe in this one specific case, but I’ve been burned in the past by relying on an out-of-tree module (or set of patches), only to have the developer lose interest, sell out, whatever (e.g. grsec). It’s rare, I suppose, but the burden on users and admins is high when it does happen.

            (the same fate could happen to patches/modules in the tree, but it’s much more rare)

      2. 1

        I do not think you can compare WireGuard and OpenVPN in terms of reliability yet. WireGuard is still new, has not been audited by a security team yet, and does not have a strong maintenance process. A little quote from the authors:

        As of June 2018 the developers of WireGuard advise treating the code and protocol as experimental, and caution that they have not yet achieved a stable release compatible with CVE tracking of any security vulnerabilities that may be discovered.[7][8]

        1. 3

          WireGuard has received formal verification from the developers [1], has been audited [2], and has been reviewed by kernel developers and the distributions that ship the kernel module. I don’t have numbers on reviewers versus SLOC, but I suspect the ratio could be much higher than OpenVPN’s, given the size of WireGuard.

          [1] https://www.wireguard.com/papers/wireguard-formal-verification.pdf

          [2] https://courses.csail.mit.edu/6.857/2018/project/He-Xu-Xu-WireGuard.pdf

        2. 2

          They’re saying that because they’re trustworthy security folks. We always advise saying “don’t trust it” until proven otherwise with strong review and/or verification. There have been some impressive results in verifying WireGuard, on top of the fact that it’s so much smaller than competing implementations.

          For now, I’ll just give you this article for some nice comparisons. That article says OpenVPN is about 600,000 lines of code. The most secure systems have been thousands to tens of thousands of lines of code, because smaller systems are easier to bulletproof. I don’t need to look at OpenVPN’s security advisories to know it will have more errors, given that extra complexity.

    6. 11

      Thanks for the nice and complete write-up!

      I noticed a few minor issues with the server and client configuration files:

      • You might want to set up a CRL (certificate revocation list) and use the crl-verify directive in the server config file to revoke client certificates in case of a compromise.
      • OpenVPN 2.4.0 ships with the new compress option lz4-v2, which is undocumented. It seems to use less CPU, drain less power on mobile devices, and possibly achieve higher throughput, according to this ticket.
      • There’s no need to specify push "compress lz4" in the server config file if the client config file has its own compress directive.
      • There’s no need to specify a key-direction (0 or 1) after tls-crypt’s keyfile path, according to the manpage: “In contrast to –tls-auth, –tls-crypt does not require the user to set –key-direction.” (See the combined sketch after this list.)
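
      Putting those points together, the relevant server lines might look like this (a sketch, assuming OpenVPN 2.4; the paths are examples):

          # server.conf
          crl-verify /etc/openvpn/crl.pem
          compress lz4-v2
          # push "compress lz4-v2" only if the clients don't set compress themselves
          tls-crypt /etc/openvpn/ta.key   # no key-direction needed, unlike tls-auth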

      I usually bundle all the certificates and keys along with the client configuration in a .ovpn file, which I find easier to transfer around and use.
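
      For anyone unfamiliar with that bundling: OpenVPN accepts the PKI material inline in the .ovpn file, with a structure like this (a sketch; the remote host is a placeholder and the PEM blocks are elided):

          client
          remote vpn.example.com 1194
          <ca>
          -----BEGIN CERTIFICATE-----
          ...
          -----END CERTIFICATE-----
          </ca>
          <cert>
          ...
          </cert>
          <key>
          ...
          </key>
          <tls-crypt>
          ...
          </tls-crypt>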

      1. 2

        Thank you very much for your feedback! I will update the article when I have some time.

        About the compression part, I learnt that it is now considered insecure because of the VORACLE vulnerability. I will probably explain that briefly and disable compression in the configuration offered, because security is the first criterion.

        Thank you very much for your feedback again :)

        1. 0

          The article is now edited; I added a lot of things. Feel free to read it again.

    7. 2

      Can someone ELI5 why Firefox is not to be trusted anymore?

      1. 4

        They’ve done some questionable things. They did this weird tie-in with Mr. Robot or some TV show, where they auto-installed a plugin (thankfully disabled) for just about everyone as part of an update. It wasn’t enabled by default if I remember right, but it got installed everywhere.

        Their income stream, according to Wikipedia, consists of donations and “search royalties”. But really their entire revenue stream comes directly from Google. Also, in 2012 they failed an IRS audit and had to pay 1.5 million dollars. Hopefully they learned their lesson; time will tell.

        They bought Pocket and said it would be open sourced, but it’s been over a year now, and so far only the FF plugin is OSS.

        1. 4

          Some of this isn’t true.

          1. Mr. Robot was like a promotion, but not a paid thing, like an ad. Someone thought this was a good idea and managed to bypass code review. This won’t happen again.
          2. Money comes from a variety of search providers, depending on locale. Money goes directly into the people, the engineers, the product. There are no stakeholders we need to make happy, no corporations we have to answer to. Search providers come to us to get our users.
          3. Pocket. Still not everything, but much more than the add-on: https://github.com/Pocket?tab=repositories
          1. 3
            1. OK, fair enough, but I never used the word “ad”. Glad it won’t happen again.

            2. When something like 80 or 90% of their funding is directly from Google… it at the very least raises questions. So I wouldn’t say “not true”; perhaps I over-simplified. Fair enough.

            3. YAY! Good to know. I hadn’t checked in a while, happy to be wrong here. Hopefully this will continue.

            But overall thank you for elaborating. I was trying to keep it simple, but I don’t disagree with anything you said here. Also, I still use FF as my default browser. It’s the best of the options.

        2. 4

          But really their entire revenue stream comes directly from Google.

          To put this part another way: the majority of their income comes from auctioning off being the default search-bar target. That happens to be worth somewhere in the hundreds of millions of dollars to Google, but Microsoft also bid (as did other search engines in other parts of the world; IIRC the choice is localised) - Google just bid higher. There’s a meta-level criticism that Mozilla can’t afford to challenge /all/ the possible corporate bidders for that search placement, but they aren’t directly beholden to Google in the way the previous poster suggests.

          1. 1

            Agreed, except it’s well over half of their income; I think Google accounts for something in the 80% or 90% range of their funding.

            1. 2

              And if they diversify and, say, sell tiles on the new-tab screen? Or integrate read-it-later services? That also doesn’t fly, as recent history has shown.

              People ask Mozilla not to sell ads, not to take money for search engine integration, not to partner with media properties, and still to keep up their investment in development of the platform.

              And people don’t offer any explanation of how Mozilla can do that while also rejecting all of its means of making money.

              1. 2

                Agreed. I assume this wasn’t an attack on me personally, but just a comment on the sad state of FF’s diversification woes. They definitely need diversification. I don’t have any awesome suggestions here, except that I think they need to diversify. Having all your income controlled by one source is almost always a terrible idea long-term.

                I don’t have problems, personally, with their selling of search integration, I have problems with Google essentially being their only income stream. I think it’s great they are trying to diversify, and I like that they do search integration by region/area, so at least it’s not 100% Google. I hope they continue testing the waters and finding new ways to diversify. I’m sure some will be mistakes, but hopefully with time, they can get Google(or anyone else) down around the 40-50% range.

            2. 1

              That’s what “majority of their income” means. Or at least that’s what I intended it to mean!

      2. 2

        There is also the fact that they are based in the USA, which means they follow American law. Regarding personal data, those laws are not very protective, and even less so if you are not an American citizen.

        Moreover, in Nightly they are testing the use of Cloudflare DNS as the resolver, even when the operating system configures another one. A DNS resolver sees every domain name resolution you perform, which means it knows which websites you visit. You should be able to disable this in about:config, but putting it there rather than in the Firefox preferences menu is a clear indication that it is not meant to be easily done.

        You can also add the fact that it is not easy to self-host the data stored by your browser. And can I be sure that data is not sold, when their first financial supporter is Google, which bases its revenue on data?

        1. 3

          Mozilla does not have your personal data. Whatever they have for sync is encrypted in such a way that it cannot be tied to an account or decrypted.

          1. 1

            They have my sync data; sync data is personal data, so they have my personal data. How do they encrypt it? Do you have any link about how they manage it? In which country is it stored? What is the law about it?

            1. 4

              Mozilla has your encrypted sync data. They do not have the key to decrypt that data. Your key never leaves your computer. All data is encrypted and decrypted locally in Firefox with a key that only you have.

              Your data is encrypted with very strong crypto and the encryption key is derived from your password with a very strong key derivation algorithm. All locally.

              The encrypted data is copied to and from Mozilla’s servers. The servers are dumb and do not actually know or do crypto. They just store blobs. The servers are in the USA and on AWS.

              The worst that can happen is that Mozilla has to hand over data to some three-letter organization, which can then run their supercomputer for a thousand years to brute-force the decryption of your data. Firefox Sync is designed with this scenario in mind.

              This is of course assuming that your password is not ‘hunter2’.

              It is starting to sound like you went through this effort because you don’t trust Mozilla with your data. That is totally fair, but I think that if you had understood the architecture a bit better, you may actually not have decided to self host. This is all put together really well, and with privacy and data breaches in mind. IMO there is very little reason to self host.

              1. 1

                “The worst that can happen is that Mozilla has to hand over data to some three-letter organization, which can then run their supercomputer for a thousand years to brute-force the decryption of your data. Firefox Sync is designed with this scenario in mind.”

                That’s not the worst by far. The Core Secrets leak indicated that they were compelling suppliers, via the FBI, to put in backdoors. So they’d either pay or force a developer to insert a weakness that looks accidental, push malware in during an update, or (most likely) just use a browser exploit on the target.

                1. 1

                  In all of those cases, it’s game over for your browser data regardless of whether you use Firefox Sync, Mozilla-hosted or otherwise.

                  1. 1

                    That’s true! Unless they rewrite it all in Rust with overflow checking on. And in a form that an info-flow analyzer can check for leaks. ;)

              2. 1

                As you said, it’s totally fair not to trust Mozilla with data. As part of that, “self-hosting” should always be possible and supported, to keep that option open. Enough said on that point.

                As to “understanding the architecture”, that also means appreciating the business practices, the ethics, and the way the privacy laws of a given jurisdiction are handled. None of this is conveyed well by any of the major players, and with the minor ones having to cater to those “big guys”, it’s no surprise that mistrust is present here.

            2. 2

              How do they encrypt it?

              On the client, of course. (Even Chrome does it this way.) Firefox is open source, so you can find out for yourself exactly how everything is done. I found this keys module; if you really care, you can find where the encrypt operation is invoked, what data is passed to it, and so on.

            3. 2

              You don’t have to give it to them. Firefox Sync is totally optional; I for one don’t use it.

              Country: almost certainly the USA. Encryption: looks like this is what they use: https://wiki.mozilla.org/Labs/Weave/Developer/Crypto

        2. 2

          The move to Cloudflare as the DNS-over-HTTPS resolver is annoying enough to make me consider other browsers.

          You can also add the fact that it is not easy to self-host the data stored by your browser. And can I be sure that data is not sold, when their first financial supporter is Google, which bases its revenue on data?

          Please, no FUD. :)

          1. 3

            move to Cloudflare

            It’s an experiment, not a permanent “move”. Right now you can manually set your own resolver and enable or disable DoH in about:config.
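
            For reference, the preferences involved are network.trr.mode and network.trr.uri (values as of the 2018 Nightly experiment; they may change):

                # about:config
                network.trr.mode   # 0 = off, 2 = DoH with fallback, 3 = DoH only, 5 = explicitly disabled
                network.trr.uri    # e.g. https://mozilla.cloudflare-dns.com/dns-query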

    8. 6

      This happens with a lot of bigger “open source” projects. They are basically code dumps, without any consideration for making them easy to deploy for someone who is not a developer intimately familiar with the code base. I guess that is as good a definition of “devops” as any.

      The easiest way to improve this is to get the software properly packaged for the most popular Linux distributions. This is a great QA tool as well: if your software (and its dependencies) are easy to package, it means you have your dependencies under control and you can build the software from source at any point.

      Unfortunately, nowadays you can count yourself lucky if any of those projects even works in a provided Docker container. Running them as a production service yourself is a completely different story, and practically impossible.

      1. 21

        To be fair, I don’t think every piece of open source is really meant to be run as-is by users so much as “here’s what we use to run this, you can build off it if you want.” It’s perfectly fair to release your internal tools for the purposes of knowledge sharing and reference without intending to support public consumption of them by non-developers.

        Further, it looks like the author made minimal effort to actually use what is distributed as it was meant to be used:

        • The suggested route of using Docker images wasn’t used. Wanting to understand something and be able to run it without a pre-built image is fine, but totally skipping the intended usage and trying to roll your own from the beginning is only likely to make the system harder to understand.
        • The projects appear to provide release packages, yet he pulled the source directly from git, and at whatever state master happened to be in at the time, rather than a branch or tag. At least one of them looks to be failing CI in its current state, so it’s not even clear that what he had was a correctly functioning version to start with.
        • He’s ignored the npm-shrinkwrap provided and automatically upgraded dependencies without any indication or testing to confirm that they will work. While it would be great to think that this wouldn’t be an issue, the state of npm is not such that this is a realistic expectation.
        1. 3

          What is the purpose of knowledge sharing if you do not make things understandable? Knowledge sharing is not just “take my stuff and understand it”. You have to make it understandable and be sure the other person understands it well. That’s why you have wikis and documentation: to facilitate understanding.

          Where do you find that the suggested route is Docker? You know, when you try to deploy something, you have to begin somewhere. I read the install section of one of the repositories, the Firefox Accounts Server, as I do for a lot of applications I install, and I followed the process. The written process is git clone and npm install. After some research, I discovered that there was not a single repository needed, but several linked together. Where is that written? How am I supposed to know?

          You can’t say I made minimal effort when I spent so much time on it. I am used to deploying software and configuring applications. I configured each microservice on my own in order to make it work. The problem was that after three days, I still had to guess things on my own, understand them, configure them properly, and fix the issues I hit. I am sorry, but that is too much. It is not my job to make the application easily understandable and easy to deploy; that is the maintainers’ job, and it is what I said in my blog post.

          When I compare it with other applications I have deployed, some of them bigger than this, FXA has a lot of work to do. The master branch is actually a development branch, there is no stable branch, and the documentation tells you to pull from it to deploy in production. :o

          Here we have just a big “this is our stuff, deal with it and make it work if you want”. I gave it a try and failed. It is not supposed to be deployed by someone who is not working on this project full time. That’s all, and it is questionable coming from the Mozilla Foundation, which publicly says that privacy matters.

          1. 5

            Knowledge sharing is not just “take my stuff and understand it”. You have to make it understandable and be sure the other person understands it well.

            Your opinion is not universal.

            Different cultures handle this problem differently. Some culture/language pairings are listener-responsible and some are writer-responsible.

            1. 2

              Clearly. I think the best way is to have both writer and listener responsible, but that is not the debate here, I guess.

              1. 0

                I’m in agreement. I can’t understand how people can defend bad practice of the craft. Why would anyone who cares about good work defend anything like this? It’s like they are working against themselves, karmically setting themselves up for a later fall through someone else’s failing… for no sensible gain.

            2. 1

              Shouldn’t they both be “responsible”? And please tell me which cultures. Are we talking professional vs. unprofessional, or what? I’ve worked in hundreds of different cultures worldwide over many decades, and I’ve never seen a claim like this.

              1. 2

                Come to Asia, never cease being frustrated.

                1. 1

                  Been there. Once I understood “hectoring”, I learned pretty quickly how to generate a larger, louder response.

      2. 4

        Here is some more concrete documentation that I found:

        Run your own Firefox Accounts Server https://mozilla-services.readthedocs.io/en/latest/howtos/run-fxa.html

        Self Hosting via Docker https://github.com/michielbdejong/fxa-self-hosting/blob/master/README.md

        Installing Firefox Accounts https://webplatform.github.io/docs/WPD/Infrastructure/procedures/Installing_and_theme_Firefox_accounts/

        The ‘code dump’ argument is a bit odd… these projects are all super accessible and being developed in the open on GitHub. No project is perfect. If you think something specific, like self-hosting documentation, is missing, file a bug or make the investment and work on it together with the devs. Open source is not one-way.

    9. 3

      I tried to install the Firefox Sync Server and failed at the same level…

      1. 2

        The Firefox Sync Server is independent of the Firefox Accounts Server. I wrote a tutorial for deploying it, and I am currently using an instance I host on my own server. You can find the link to the tutorial here.

        1. 1

          Oh, I know, but I meant that it’s a bit under-documented and not an easy thing to deploy (at least I couldn’t). Thank you for your tutorial, I’ll take a look today and give it a try!

    10. 4

      A lot of people recommend storing data in Google, Microsoft or Amazon cloud services. When you ask them about privacy, they just reply “encrypt it and it is OK”. I am sorry, but it is not OK, not OK at all. Do you really think a file encrypted with today’s technology could resist 5, 10 or 20 years of technological improvement?

      I think it’s reasonable to assume that e.g. AES won’t be broken in 10 years. (and IIRC quantum attacks are against public key crypto, not symmetric.)

      An actual concern with the cloud services would be trusting them to do server-side encryption. Don’t :) That is a compliance feature, for when stuff “must be encrypted” but not trusting the service isn’t actually a concern.

      1. 1

        I have read multiple papers which said that AES-256 is very secure to use for encryption and that, if I trust them, we are far from breaking it. However, who could really know about that except scientists or government agencies?

        However, for me, when the task is not easy, mistakes can appear, and an expert needs just one weakness to break in. It is not easy to encrypt something with the highest security parameters while following best practice. You also need correct management of the key, which could otherwise be compromised.

        For example:

        • encrypting with low security parameters (small key size, a poorly chosen encryption algorithm)
        • a vulnerability or back door in the software that implements the encryption
        • insufficient entropy during key generation
        • a compromised system performing the encryption

        And with time:

        • increases in the power of computers
        • improvements in technology
        • improvements in mathematics
    11. 1

      The website wasn’t accessible for one day. It is fixed now; sorry for the disruption.

    12. 2

      Pretty nifty. Are automatic updates the main benefit of this over /etc/hosts?

      1. 2

        Thank you!

        The main benefit of this setup is that you can use your own DNS server with all your devices, which means configuring it only once. If you do it with /etc/hosts, you have to do it on each device, and sometimes you can’t, for example on a non-rooted Android smartphone.

    13. 2

      I do the same with a Raspberry Pi and dnsmasq on my home network. I am not using Pi-hole, since I am not a fan of these curl | bash installers, plus I do not need the web interface.
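
      For reference, a minimal dnsmasq setup for this only needs a few lines (a sketch; the interface and file names are examples):

          # /etc/dnsmasq.conf
          interface=eth0
          domain-needed    # don't forward plain hostnames upstream
          bogus-priv       # don't forward reverse lookups for private ranges
          # blocklist: one "address=/ads.example.com/0.0.0.0" line per blocked domain
          conf-file=/etc/dnsmasq.d/blocklist.conf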

      1. 1

        I was considering Pi-hole for a while, but I heard it is not as clean as it looks. Only install what you really need, and it is better if you can understand exactly what it does.

        It is a good advice, thank you for sharing.

    14. 1

      Meraki thinks this is malware-related (probably some dumb work setting), but Firefox is also kvetching about SSL.

      1. 1

        Hello friendlysock,

        Could you please tell me more about that by private message? How did you get the report?

        Thank you very much, Mirabellette

    15. 1

      If you want a simple, easy and fast ebook reader for .epub files (.mobi sadly isn’t supported), I can really recommend mupdf.

      1. 2

        Thank you for your advice; however, I prefer Calibre, which is under GPLv3, contrary to MuPDF, which is owned by the Artifex company.

        1. 1

          MuPDF is indeed copyrighted by Artifex Software Inc., but it is released under AGPLv3.