1. 10
  1.  

  2. 2

    If there weren’t Flathub, we could’ve ported Chocolatey for easier software installation :)

    1. -4

      Before the reinstallation I did a backup of my entire /home directory. So after I reinstalled all the Flatpak applications I had installed before, I could simply restore almost all files and configurations of the Flatpak apps by copying the .var folder from the backup without having to set up everything from scratch. Flatpak creates a virtual file system in this folder for each application.

      If I had installed all programs manually via the package management of the Linux distribution, it would not have been enough to simply copy a folder to restore all configurations. I would have had to copy many different folders.

      How is it possible to be so wrong in every single sentence? :-/

      1. 10

        Is it really necessary to phrase your criticism like that? @jlelse is sharing their opinion, and replying in this way does little more than stop newcomers to the industry from feeling comfortable contributing.

        You can see on their bio that they’re a student, would you say something like this to a junior or grad hire where you work? I really hope not.

        1. 1

          Thanks!

          1. 2

            No worries! Don’t let negative comments stop you writing and sharing.

            Nice to see your post about Gitea, I’ll probably be setting something like that up for myself soon!

            1. 2

              I’ve been sharing my opinions online for several years now and have received plenty of criticism (some of it more constructive than this). It hasn’t stopped me; if anything, it’s had the opposite effect. 😅

              Gitea is really a nice tool, have fun! 😌

              1. 2

                some of it more constructive than this

                lol sounds like you’ve been pretty lucky then 🤣

                Thanks! It is! I had one before, then replaced it with self-hosted GitLab. Kinda want to go back now haha

        2. 6

          Could you please explain what’s wrong?

          1. 4

            Before the reinstallation I did a backup of my entire /home directory.

            Needless backup of caches – there is no point in backing up all of $HOME.

            So after I reinstalled all the Flatpak applications I had installed before, I could simply restore almost all files and configurations of the Flatpak apps by copying the .var folder from the backup without having to set up everything from scratch.

            So the caches just got copied back again, thanks to Flatpak’s insane violation of the XDG base directory standard.

            Flatpak creates a virtual file system in this folder for each application.

            Flatpak needs to stop violating the XDG base directory rules.

            If I had installed all programs manually via the package management of the Linux distribution, it would not have been enough to simply copy a folder to restore all configurations.

            There would have been no difference between “normal” applications and Flatpak apps, considering the whole $HOME dir was backed up – which is needless, as backing up .config (and perhaps .local) should be enough for non-Flatpak applications.

            I would have had to copy many different folders.

            You would have to copy .config (and perhaps .local) – which you did anyway by backing up all of $HOME.

            Even if you did only back up .var, you would still have needlessly copied cache folders, as Flatpak makes it impractical to avoid copying cache directories, thanks to its “virtual file system”.


            So it comes to either

            • backing up .config (and perhaps .local) for “normal applications”, or
            • backing up .var, then manually excluding cache directories inside it.
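
            As a rough illustration of the second option, here’s a minimal Python sketch that copies ~/.var to a backup location while skipping directories named “cache”. The destination path and the “cache” naming are assumptions based on Flatpak’s ~/.var/app/<app-id>/{cache,config,data} layout, so treat it as a sketch rather than a ready-made backup tool:

              # Sketch: back up ~/.var while excluding per-app cache directories.
              import shutil
              from pathlib import Path

              SRC = Path.home() / ".var"
              DST = Path("/mnt/backup/.var")  # hypothetical backup destination

              def skip_caches(directory, entries):
                  # copytree calls this for every directory it visits; returning
                  # "cache" here excludes e.g. ~/.var/app/<app-id>/cache.
                  return [name for name in entries if name == "cache"]

              shutil.copytree(SRC, DST, ignore=skip_caches, dirs_exist_ok=True)
              print("Backed up", SRC, "to", DST, "excluding cache directories")

            The same approach works for the first option by pointing SRC at ~/.config (and ~/.local) instead.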
            1. 3

              What about ~/.mozilla for example?

              1. 1

                There is a bug report for that.

                Generally I try to minimize the amount of applications that ignore the rules of the operating system they run on.

                I did this by making $HOME read-only and picking alternatives that follow the rules.

              2. 5

                Not to mention the bit at the bottom about distros’ package distributions.

                Sure thing, let’s move everything to incredibly slow, memory- and CPU-hungry containers, so that we can rely on people we barely trust not to be malicious, whose main priority is having the fanciest new features, to keep on top of security vulnerabilities, rather than on the people whose sole interest is producing a stable and secure system.

                This bad-reinvention-of-static-linking fad boggles the mind.

                1. 1

                  What’s the CPU overhead of a Docker container? It’s not literally zero (in most cases there’s an extra check executed), but that still seems like a bizarre claim.

                  Memory overhead makes slightly more sense, as there’s less scope to reuse dynamic library pages, but that applies equally to non-container static linking.

                  This leaves me with the impression that you don’t actually understand the things you talk about. Containers have plenty of flaws without inventing problems that don’t exist.

                  1. -1

                    ? I think anyone that has ever touched a ‘container’ finds them to be CPU and memory hogs. Not sure how that is even debatable.

                    The comment about them being malicious is imo understated. Most containers in popular registries are filled with vulnerabilities if not outright malware.

                    1. 5

                      That’s pretty debatable actually, and depends heavily on your host OS. On Linux, a container is nothing more than a process with a bunch of security policies set. Performance-wise, they behave exactly like a normal process.
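
                      To make that concrete, here’s a toy Python sketch (my own illustration, not how Docker itself is implemented) showing that a “container” boundary is just a process moving into its own kernel namespaces. It only touches the UTS (hostname) namespace and needs root or CAP_SYS_ADMIN to run:

                        # Toy demo: a forked process enters its own UTS namespace and
                        # changes "its" hostname without affecting the rest of the system.
                        import ctypes
                        import os
                        import socket

                        CLONE_NEWUTS = 0x04000000  # new UTS namespace flag, see unshare(2)

                        libc = ctypes.CDLL(None, use_errno=True)

                        pid = os.fork()
                        if pid == 0:
                            if libc.unshare(CLONE_NEWUTS) != 0:
                                raise OSError(ctypes.get_errno(), "unshare failed (run as root)")
                            socket.sethostname("toy-container")
                            print("child sees hostname:", socket.gethostname())
                            os._exit(0)

                        os.waitpid(pid, 0)
                        print("parent still sees hostname:", socket.gethostname())

                      Real container runtimes add mount, PID, network and user namespaces plus cgroups and seccomp on top, but it’s still the same process machinery, which is why the steady-state CPU cost is essentially that of a normal process.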

                      1. 1

                        not really - anyone using containers in prod will be using an orchestrator such as k8s, which adds serious overhead

                        1. 1

                          Right, but that’s the orchestration and networking abstractions being heavy. Containers themselves are pretty lightweight. For development, or for running single applications on your machine, they shouldn’t be much worse than running the application directly (other than the non-shared memory, as someone else pointed out).

                          1. 0

                            That’s a very reductive argument you are making. What’s the point of saying that a single container on a laptop with no orchestration is lightweight? Most people who have performance concerns are concerned with running in production.

                            However, if you wish to continue down this road, let’s talk about the fact that even in a dev environment you are using NAT for something like a local webserver.

                            Finally - I just don’t understand the apologetics in the container world. Why be so defensive?

                            1. 1

                              That’s a very reductive argument you are making.

                              Well, the argument is reductive, because it’s being made in the reduced context of using containerized solutions for software deployment on one’s computer, or a small server. You were the first to mention performance concerns in the thread.

                              Most people who have performance concerns are concerned with running in production.

                              I’m sure there’s an argument to be had about how much overhead there is, but seeing as Google has successfully created a company worth hundreds of billions of dollars using containerized deployments, I’m a bit skeptical about them being business killers. In fact, I’d argue 99.99% of companies out there would be financially wiser spending another $400/mo on a server to deal with any performance degradation than having expensive developers waste time managing dependency hell and convoluted deployment stories.

                              And I say that as someone running a company with exactly 0 containers in production.

                              However if you wish to continue down this road let’s talk about the fact that even in a dev environment you are using nat for something like a local webserver.

                              I’d be incredibly impressed by any development environment pushing so much data between containers that NATing between them becomes a concern. My company deals with video streaming; our development environment streams video between containers; I’ve never noticed a measurable impact on the development experience, other than (maybe?) whatever tiny increase in CPU usage might make my battery life shorter by a few minutes per charge.

                              Finally - I just don’t understand the apologetics in the container world. Why be so defensive?

                              Containers are a tool. The right tool, at the right time, might make a huge difference (spoon vs. hammer). I think people aren’t defensive, they are just relaying how containers improved their particular experience.

                      2. 4

                        The comment about them being malicious is imo understated. Most containers in popular registries are filled with vulnerabilities if not outright malware.

                        Not disputing this, but could you give some examples of containers filled with malware? Especially in popular containers?

                        (Filled with vulnerabilities I can completely believe.)

                        1. 1

                          Well, a simple Google search will show hundreds of examples - here’s one from less than a few weeks ago:

                          https://blog.aquasec.com/threat-alert-kinsing-malware-container-vulnerability

                          1. 1

                            That’s an attack on unprotected Docker ports, not an example of “Most containers in popular registries are filled with vulnerabilities if not outright malware.” I’d like to see support for the claim that many containers (I assume you meant images) contain malware.

                            1. 2

                              Docker was called out for this behavior a while back because they refused to remove the malware after it had been reported for over eight months.

                              https://arstechnica.com/information-technology/2018/06/backdoored-images-downloaded-5-million-times-finally-removed-from-docker-hub/

                              So people were like, OK, we’ll set up our own registries - and then you see things like this:

                              https://threatpost.com/docker-registries-malware-data-theft/152734/

                              Also - let’s not forget that Docker Hub itself was hacked last year:

                              https://threatpost.com/docker-hub-hack/144176/

                        2. 3

                          Are you sure you aren’t confusing container overhead with the VM overhead in Docker for Mac? I’ve never seen a container impose significant CPU overhead, and I work with them day in and day out.

                          1. 1

                            In local dev envs, yes, you’re correct - you get massive overhead from running them in a VM, such as on a Mac. In prod envs the overhead comes from the fact that they duplicate existing networking and storage layers (e.g. overlay networks).

                            If it’s just one container on a Linux laptop, maybe you are right, but the people I know don’t run just one. If you wish, you can qualify my statement with “production use”.

                            1. 2

                              Fair enough, but what’s the alternative? Throw out Fargate, K8s, etc. and manually operate VMs (bin-packing applications, configuring logging/SSH/process management/monitoring/etc.)? This seems penny wise and pound foolish.

                              1. 1

                                I’ve never found a need for containers, and I know I’m definitely not alone. I also find the argument that containers somehow simplify systems administration to be completely contrary to my experience of companies that have adopted them.

                                1. 1

                                  What is your experience? In my experience, it’s much easier to wrap an application in a Docker image and send it off to Fargate than to solve all of those problems yourself with VMs. We were able to transition our ops team into a Cloud Engineering team: rather than operating our individual applications, coordinating quarterly deployments, babysitting VMs, etc., we now focus on showing the dev teams how to use CloudFormation and CircleCI to build out their own pipelines and infrastructure that they in turn operate, monitor, etc. They deploy their own applications as often as 12 times per day. Obviously not all of this was containers (containers aren’t a silver bullet), but there’s no way we could have pushed the overhead of managing VMs onto our dev teams.

                                  1. 1

                                    Quarterly deployments?

                                    Maybe this is just a difference of work environments rather than tooling.

                                    I was a very early adopter of EC2 back in 2006(?) and have heavily used public cloud VMs in production deployments ever since (I’ve worked at SF Bay companies for the past 10+ years), and I was a heavy bare-metal user for even longer before that.

                                    In all these years of both bare metal and public cloud VMs, I’ve always worked at companies that were pushing deploys in excess of ten times/day. I would not work at a company that didn’t. As for the CircleCI comments - I’d also not work at a company that doesn’t have CI. That’s kind of table stakes.

                                    As for babysitting servers - I suppose that depends on the task. It’s my experience that companies in the Bay Area that have embraced containers have much larger (dev)ops teams than those that don’t.