1. 25

  2. 11

    The Firefox snap does not support the NativeMessaging protocol yet, but this feature is planned to be added soon.

    This means that the 1Password extension cannot be unlocked by the desktop application and vice-versa. It’s been broken since they introduced the snap and I’m disappointed they’re shipping another major version with this still broken.

    1. 3

      Similarly, the Firefox snap will often crash due to font issues (at least with Japanese fonts); it makes fonts unload, and FF is kinda unusable in that state.

      There are workarounds, but snap is still pretty rough around the edges… I get the value of snaps, but I wouldn’t mind if the sandboxing were toned down a bit so that applications could work.

      1. 2

        I’d guess thus the keepassxc integration is also broken?

        1. 1

          yep, it’s broken now..

      2. 4

        I don’t know how long it’s existed, but I really appreciate the option for a “minimal” install. I put this on my testing laptop and it was nice starting from a mostly blank slate.

        1. 4

          “We’ve enabled the userspace OOMD service and are shipping the systemd-oomd package by default on the “Ubuntu Desktop” flavour, to avoid overloaded systems and the need of the kernel’s OOM killer to kick in.”

          This is a concerning choice, IMHO, at least in its current incarnation. Firefox started “crashing” on me randomly after upgrading to this release. No dialogs, no warning of any kind; it just disappeared off the screen, 2 or maybe 3 times now. Only after reading these release notes do I know why.

          I bet a lot of “why do my apps keep randomly crashing/closing/disappearing on Ubuntu?” type bugs/issues will crop up if this behavior doesn’t change. Even a simple notification (e.g. “systemd has closed Firefox: out of memory.”) would go a long way.
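
          If you want to confirm whether systemd-oomd was the one killing your apps, you can ask the journal and the oomctl tool directly (a quick diagnostic sketch; output obviously depends on your system):

          ```shell
          # Show this boot's systemd-oomd log entries; kills are logged here,
          # including which cgroup (e.g. the Firefox app scope) was the victim.
          journalctl -b -u systemd-oomd

          # Dump the memory-pressure and swap limits oomd is monitoring.
          oomctl dump
          ```
          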

          On a brighter note, the battery life of my laptop when sleeping seems to have increased considerably. I’m quite happy with that change. :)

          1. 2

            Careful with GCC 11. They made a change in C++: it is stricter about #include dependencies between headers [1]. Expect a lot of existing code to break. I actually had to downgrade to get very large projects to build again.

            [1]: https://gcc.gnu.org/gcc-11/porting_to.html#header-dep-changes

            1. 2

              They made a change in C++, they are more strict about #includes of headers.

              What they did is stop including headers they don’t need. Code that relied on such (non-portable, I must add) side-effects was always broken and now has to be fixed.
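
              As a concrete illustration (a minimal sketch; exactly which header used to pull in which varies by libstdc++ version): code that used std::numeric_limits without including <limits> often built before GCC 11 because some other standard header included it transitively. The fix is simply to include what you use:

              ```cpp
              #include <iostream>
              #include <limits>   // the GCC 11 fix: include what you use, instead of
                                  // relying on another header to pull this in for you
              #include <vector>

              int main() {
                  // Before GCC 11, std::numeric_limits often resolved even without
                  // #include <limits>, via a transitive include inside libstdc++.
                  // That side effect is gone in GCC 11's libstdc++.
                  std::vector<int> sizes{1, 2, 3};
                  std::cout << "max int: " << std::numeric_limits<int>::max()
                            << ", elems: " << sizes.size() << '\n';
                  return 0;
              }
              ```
              
              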

              1. 1

                Linus Torvalds would reply: “We do not break user space!” If a Linux API works one way for 20 years, and everyone relies on it, you don’t just break everyone’s code. Why is GCC/C++ different?

                Why is code that compiled and worked perfectly well for 20 years broken? Perhaps it is broken by some definition, but that raises the question of whether the standard is broken and the code is correct. As for your comment that the code is non-portable, I already wrote a lengthy blog post on this subject: https://mental-reverb.com/blog.php?id=24

              2. 1

                Could they not have added a warning for a couple of versions before implementing this? It seems like one of those things that should be easy for a compiler to check for and automatically fix or warn about. Like “hey, you’ve got to add the #include for what you’re using, instead of depending on another standard library header to include it for you…”

                1. 1

                  Sounds like a nightmare of special cases, tbh. It’s the same definition, so you’d have to keep track of the whole chain of includes. Actually multiple chains and their ordering, because you could first include the class through the side effect and then explicitly. And then you have things created through defines. And you have to maintain the list of cases to warn about… :scream:

                  I think that’s one of those things that seems simple to do, but isn’t. :-( GCC keeps only the first place something was defined, if I remember correctly.

                  1. 1

                    It’s a simpler problem than that because it’s a standard library with a set of well-defined symbols and you need to handle the case where these identifiers are used but are not defined. You can maintain a list of function and type names and the headers that the standard says they come from and tell people what header they’re missing. Clang does this for a load of C standard functions.

                2. 1

                  To be explicit, this is not a change with GCC, it is a change with libstdc++ and will affect any compiler using the same implementation. Historically, libstdc++ has always had a policy of minimising header includes, which is why code written against libc++ (which doesn’t) often fails to compile. The fix is always trivial: add the headers that the standard says you need to include. With modules (any decade now), all of this will go away and I can just put import std; in my C++ source files if I want to use the stdlib.

                  This is a far less annoying bug than the one in 20.04’s GCC 9.4, where the preprocessor evaluates __has_include(<sys/futex.h>) to false, in spite of the fact that #include <sys/futex.h> works fine. This has no simple workaround and is why I’ve given up supporting GCC 9.4 on Ubuntu for some projects: GCC 10, 11, or any version of clang is fine. Apparently the bug is something to do with the weird fixincludes thing that GCC does, which makes the search paths for #include and __has_include diverge, so __has_include is not reliable.
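
                  The feature-test pattern in question looks like this (a sketch; whether the header is actually present depends on your libc, and on the buggy 20.04 GCC 9.4 the check can report 0 even though the #include itself would work):

                  ```cpp
                  #include <iostream>

                  // Standard guarded-detection idiom: nest the __has_include use
                  // inside an #ifdef so compilers lacking the operator still parse
                  // this file.
                  #ifdef __has_include
                  #  if __has_include(<sys/futex.h>)
                  #    include <sys/futex.h>
                  #    define HAVE_SYS_FUTEX_H 1
                  #  endif
                  #endif
                  #ifndef HAVE_SYS_FUTEX_H
                  #  define HAVE_SYS_FUTEX_H 0
                  #endif

                  int main() {
                      std::cout << "sys/futex.h detected: " << HAVE_SYS_FUTEX_H << '\n';
                      return 0;
                  }
                  ```
                  
                  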