Threads for Yogurt

  1. 2

    Package Registry

    Development on the Swift package manager is focused on starting work on an open source package registry server implementation in concert with the community. The goal is to create the technical components required to transition the Swift package ecosystem from one based on source control to one based on registries, enhancing the security and reliability of the ecosystem. We will work with community-run projects such as the Swift Package Index to ensure great package discovery alongside the benefits that the registry brings.

    Interesting. Is there somewhere people are talking about this move? I’d love to know whether it’s going to be a central index and how they plan to integrate the package index.

    1. 1

      I don’t remember if I’ve heard Apple talk about making a central index before. But the primary discussion spot before implementation, besides formal proposals, is generally the Swift Forums. For what it’s worth, here’s the package manager section.

      1. 1

        Awesome. I’ll check it out, thanks!

    1. 20

      I guess that means hella- didn’t become official.

      1. 1

        Pour one out :(

      1. 10

        I’m glad I mostly use distro packages rather than language “package managers”, containers & static linking.

        If this is a client-side vuln we’ll also have to worry about the plethora of mobile apps that ship OpenSSL, often unwittingly.

        1. 9

          I’m prepared! (I’ve typed out sudo apt update && sudo apt upgrade and have my finger hovering over the enter button.)

          1. 7

            Yup. I’m sure we’ll see a flurry of follow-on advisories for the 80 billion packages that thought shipping a bundled version of OpenSSL was a good idea.

            1. 8

              “but it’s easier to statically link / use a vendored shared library / etc.”

              Deferring the tough problems until you’ve got 0-days in your production systems rather than dealing with complexity up front isn’t an especially great idea…

              1. 1

                I get the points about static linking, but in reality it’s not that difficult to prompt a rebuild of those packages that statically link openssl.

                1. 10

                  Assuming you have a list of packages that statically link OpenSSL :)
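
                  Absent such a list, one crude heuristic is to scan binaries for the embedded version banner, which usually survives static linking. A minimal sketch, assuming a POSIX shell with GNU grep; the function name is made up for illustration:

```shell
# Hedged sketch: flag files under a directory that embed an OpenSSL
# version banner -- a rough heuristic for statically linked or
# vendored copies. The function name is invented for illustration.
find_bundled_openssl() {
  for bin in "$1"/*; do
    # grep -a treats the binary as text; the "OpenSSL x.y.z" version
    # string usually survives static linking.
    if grep -aq 'OpenSSL [0-9]' "$bin" 2>/dev/null; then
      echo "$bin"
    fi
  done
}
```

                  Something like `find_bundled_openssl /usr/local/bin` would then print candidates to check against the advisory.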

                  1. 4

                    Anything in Rust that uses rust-openssl sometimes statically links OpenSSL…

                    1. 1

                      Thankfully Rust’s openssl crate is pretty well maintained and pretty commonly used, so I expect we’ll see an update to it as well on Tuesday.

                      1. 3

                        How do I update all of the things that transitively depend on it? Across all my machines and all of the containers running on them?

                        1. 1

                          That I don’t know exactly, but cargo should know what versions get used in each build, and checking that against an eventual RustSec advisory shouldn’t be too hard.

                          1. 4

                            Well, it turns out that cargo (and rustup) statically link OpenSSL, so depending on the vulnerability, you could hit an RCE when cargo goes to fetch the RustSec advisories. (Like if it’s exploitable in common TLS client usage and someone poisons your DNS to tell your cargo to talk to their server.)

                        2. 2

                          Amusingly, Rust’s OpenSSL bindings are still on the 1.x version: 3.0 has proven problematic for other reasons as well (the build depends on less widely available Perl modules, and there are some perf regressions).

                      2. 3

                        Pray also that no one decided to make one-off patches to rename functions or change argument variables.

                        1. 1

                          That’s basic package metadata which most package managers use.

                          1. 4

                            Ah, I more mean ad-hoc hand-compiled packages. Sorry I wasn’t more specific.

                            1. 1

                              Really regretting not maintaining a list

                            2. 2

                              Does this package statically link a vendored openssl 3.0? https://crates.io/crates/kv-assets What basic package metadata would indicate that?

                              1. 1

                                Many package managers require a list of dependencies, including compile-time-only deps. I’m not familiar with “crates” but I think rust programs have a Cargo.toml or Cargo.lock listing dependencies? Or does rust allow implicit deps?

                                1. 2

                                  Cargo.toml lists immediate dependencies. Cargo.lock lists transitive dependencies.
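
                                  As an illustration, a pinned version can be pulled out of a lockfile with a quick scan. This is a hedged sketch, not a real audit tool (a real one would parse the TOML properly); the helper name is invented:

```shell
# Hedged sketch: pull the pinned version of a crate out of a Cargo.lock.
# Lockfiles are TOML with repeated [[package]] tables; a grep-based scan
# is crude but good enough for a quick survey. Helper name is invented.
locked_version() {
  # $1 = path to Cargo.lock, $2 = crate name
  # -A1 prints the matching "name" line plus the "version" line below it
  grep -A1 "^name = \"$2\"\$" "$1" | sed -n 's/^version = "\(.*\)"$/\1/p'
}
```

                                  Running `locked_version Cargo.lock openssl-sys` over each checkout gives you something to compare against an advisory’s patched-version range.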

                              2. 1
                      1. 13

                        Filtering for applicability is great, and should have been a standard. I’m soooo tired of security tools crying wolf. Just like desktop anti-virus software, they have lost sight of providing security and have become security theatre and a numbers metagame.

                        CVE databases are full of low-effort reports, like flagging of uses of RegEx. That’s a slam-dunk “Possible DoS critical vulnerability”.

                        CVSS scoring rules are designed to be cover-your-ass for security vendors, not for usability by people who need to respond to these reports. The scoring rules are “we can’t know if you have written vulnerable code, but you may have, so that’s a critical vulnerability in the code we imagine you may have written”.

                        If npm install reports less than 17 critical vulnerabilities, I double-check if the installation went properly.

                        1. 11

                          Hate to say I agree with this. I work in security, and so many of my peers across now several organizations view the reports from these tools as absolute truth rather than as raw data/input to further analysis. A vulnerability in a component does not necessarily translate into a vulnerability in your application, but it is what the scanners you, and ultimately your customers, run will report.

                          We’ve seriously lost our way. There needs to be more rigor in how these tools are applied. So much focus in “Supply Chain Security” is instead put on shifting responsibility to open source developers who absolutely did not sign up to perform vulnerability verification services for their commercial users.

                          1. 3

                            CVE databases are full of low-effort reports, like flagging of uses of RegEx. That’s a slam-dunk “Possible DoS critical vulnerability”.

                            The world of security research is a weird one. There’s an incentive to get as many CVEs at as high a severity ranking as possible to bolster your resume. No one in hiring seems to care about quality :(

                          1. 4

                            Host here. Curious what Lobsters thinks of this backstory of Android. It’s wild to think about what the world might look like if Android hadn’t become so dominant and mobile was an Apple (or Microsoft or whomever) proprietary monoculture.

                            1. 12

                              It’s worth remembering what the smartphone market looked like prior to the iPhone: Symbian had 70% of the market. The Symbian EKA2 kernel is beautiful (and the book about its internals is now a free download, everyone should read it) and Fuchsia’s design shares a lot in common with it. Symbian was moving towards open source and towards using Qt for the GUI (their existing userspace APIs were designed for systems with 2-4 MiB of RAM, which added a lot of cognitive load to solve problems that didn’t really exist on machines with 128+ MiB).

                              Android successfully killed Windows Phone and Symbian. If it hadn’t happened, then it’s quite likely that an EKA2+Qt stack would have become a major player. I wouldn’t be surprised if we’d ended up with a lot more diversity: I don’t think iOS would have killed Windows Phone and I don’t think either would have killed the EKA2+Qt. The really interesting question is what would have happened with things like Maemo / Meego / WebOS / FirefoxOS in a more competitive smartphone OS market.

                              The competing operating systems were mostly killed by network effects from the app stores. This is also what’s largely killed Android as an open-source platform: you can run AOSP on a handset, but if you don’t install the Google Play services then a load of third-party apps simply won’t run and you must use Google Play Services for a bunch of features (assuming you want those features) if you want to offer the app in the Google Play Store. Since most users use the Play Store, being excluded from there is a problem. I’m really looking forward to a competent antitrust regulator taking a look at this.

                              It’s entirely possible that one of the other vendors would have created a decent app store. HP had a very nice model for their WebOS app store where, in addition to their store, they provided the back-end services as something that you could repackage, so you could create your own store for your apps but people would still get all of the update and deployment things that they’d get buying from the HP storefront.

                              I think it’s more likely that we’d have seen much more of a push towards mobile-optimised web apps instead of native apps. If no platform had more than 20-30% market share then you could either build 5-6 native apps or one web app for the same market size and that’s a very different proposition to building two for the duopoly.

                              1. 5

                                I never experienced that Symbian beauty. I needed to port a simple game to it, and its developer-facing weirdo C++ side left such a terrible impression on me that I only remember promising myself to never touch Symbian again. I still wouldn’t use Qt.

                                Anyway, I think Nokia killed itself, and it had nothing to do with the OS kernel. They ignored capacitive screens and tried to keep making featurephones until it was too late. Their popular low-end Symbian was unfit for the smartphone era, and their fancy Symbian, like Windows CE, was a toy desktop OS meant for running spreadsheets, not being a normal phone operated with fat fingers.

                                I miss WebOS. They did all the right things in software, but fast-enough hardware to run it didn’t exist yet. They ended up being free touch-UI R&D for later copying by Apple and Google.

                                Windows Phone was a decent attempt, but at that point having a 3rd party app ecosystem mattered.

                                1. 1

                                  I never experienced that Symbian beauty. I needed to port a simple game to it, and its developer-facing weirdo C++ side left such a terrible impression on me that I only remember promising myself to never touch Symbian again. I still wouldn’t use Qt.

                                  Exactly. A beautiful kernel, hidden under an awful userspace.

                                  Anyway, I think Nokia killed itself, and it had nothing to do with the OS kernel

                                  I disagree. They had a great kernel and a terrible userland. They tried to fix this by replacing the kernel with Linux, which was completely inappropriate for devices with such a small amount of RAM (Android didn’t become successful until smartphones had at least 512 MiB of RAM). When this failed, they tried to jump on Windows Phone as an alternative.

                                  They had a few teams building some quite nice UIs on top of Qt, but they kept changing the platform out from underneath.

                                  I miss WebOS. They did all the right things in software, but fast-enough hardware to run it didn’t exist yet. They ended up being free touch-UI R&D for later copying by Apple and Google.

                                  Completely agreed. I got a TouchPad in their free-toys-for-open-source-developers programme and loved it, but they killed the entire ecosystem a few weeks after I got mine.

                                  Windows Phone was a decent attempt, but at that point having a 3rd party app ecosystem mattered.

                                  Yup, my partner had one and it had a great UI (I was astonished - I expected to hate it and ended up liking it more than iOS and Android), but no apps and a really buggy sound subsystem (crashed every few days which made alarms silent and phone calls not work).

                                  1. 2

                                    A beautiful kernel, hidden under an awful userspace.

                                    Do you think the userspace could have been any better, given the resource constraints of the device and the state of C++ at the time? Would modern C++, or a newer language like Rust, enable a better userspace?

                                    A bit of a tangent: I keep wondering if a modern device that embraces constraints on CPU speed and RAM, similar to Symbian-based or early embedded Linux devices, could achieve outstanding battery life. Your comment the other day about your Psion computer running on two AA batteries made me think again about this. In particular, I wonder how good it could get in a device designed specifically for blind people, and thus having no screen, but only audio output. The big thing I’m not sure about is how much battery power is consumed by CPU and DRAM compared to the wireless radio(s), particularly when wireless connectivity is enabled but the CPU is mostly idle.

                                    1. 2

                                      Do you think the userspace could have been any better, given the resource constraints of the device and the state of C++ at the time? Would modern C++, or a newer language like Rust, enable a better userspace?

                                      There was PIPS, which provided POSIX (no fork but otherwise most of a POSIX syscall API). This would have, at least, been not worse than Linux / XNU as the system interface and would have made porting *NIX libraries much easier if it were the default. The kernel was explicitly designed to support multiple personalities, with the expectation that the EPOC32 interfaces would not last forever. POSIX probably wasn’t ideal but might have been necessary given that Android and iOS were both POSIX.

                                      The big thing I’m not sure about is how much battery power is consumed by CPU and DRAM compared to the wireless radio(s), particularly when wireless connectivity is enabled but the CPU is mostly idle.

                                      It depends a lot on how much you’re using the wireless. Active use can drain it quite a bit but modern wireless chipsets are really good at entering low-power modes where they are mostly asleep and wake up as passive receivers periodically to see if the base station has told them to wake up properly.

                                      From what I remember, I got about 30 hours of active use from the Psion. I remember it told me how much current it had drawn, but I don’t remember the numbers. Wikipedia tells me that alkaline AAs can provide up to 3.9Wh, so probably something like 7-8Wh for a pair of them (that sounds high, I vaguely remember it being closer to 3). At 8Wh over 30 hours, you’re looking at a maximum draw of about 250mW. That’s quite a lot for a microcontroller, and I think low-power WiFi chipsets can run in about 50mW (based on a 10-year-old and probably wrong memory), so this seems fairly feasible.

                                  2. 1

                                    I was a Symbian fanboy and still miss my E61, but the “app store” (it didn’t really exist) was a mess. The entire user-facing software experience apart from the built-in apps was a mess. At one time there were 2 separate web browsers.

                                    Still, WirelessIRC is the best mobile IRC client I’ve ever used to this day.

                                  3. 2

                                    and the book about its internals is now a free download

                                    Do you happen to have a link to it?

                                    1. 1

                                      I think it’s more likely that we’d have seen much more of a push towards mobile-optimised web apps instead of native apps. If no platform had more than 20-30% market share then you could either build 5-6 native apps or one web app for the same market size and that’s a very different proposition to building two for the duopoly.

                                      That is interesting, and I can see that fragmentation would favor the web. My understanding from this interview is that Google’s Android bet was an attempt to keep the web (and Google search accessed via the web) a viable thing in the mobile world and prevent the kind of monoculture that Windows had in the desktop OS world pre-web.

                                      1. 3

                                        Android was probably nothing at all until the iPhone, and indeed, really, until the App Store; Google needed to prevent Apple from becoming Microsoft, as you note. Android has succeeded at its initial intent, and too, has become quite good, which is, if I may be snarky, unusual for a Google product that has achieved its aims.

                                      2. 1

                                        Since most users use the Play Store, being excluded from there is a problem. I’m really looking forward to a competent antitrust regulator taking a look at this.

                                        Honestly, I suspect app stores are a natural monopoly on a platform, and addressing that is treating the wrong symptom and likely to make the experience worse. The actual platforms themselves were what end-users could care about and make a choice over.

                                        You’re probably right on PWAs making a lot of sense for the average case that has no need to take advantage of platform APIs though.

                                        1. 5

                                          The problem isn’t the app store, the problem is the coupling of the Play Store and Play Services, which makes it incredibly difficult to ship a phone that doesn’t send a load of data to Google.

                                      3. 2

                                        Palm may have seen more success with the webOS platform.

                                        1. 1

                                          Also I very much enjoy the show. Just wanted to get that out there!

                                        1. 1

                                          Probably with https://www.gnu.org/software/parallel/ or just pushing them to background tasks in a bash script
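
                                          The plain-bash variant can be as small as one background job per target plus a wait. A minimal sketch with illustrative names (none of these are from the thread); swap in whatever per-host command you actually run:

```shell
# Hedged sketch of the plain-bash route: run one background job per
# target and wait for them all. Names are illustrative, not a real tool.
fan_out() {
  cmd=$1; shift
  for target in "$@"; do
    "$cmd" "$target" &   # each invocation becomes a background job
  done
  wait                   # block until every job has finished
}
```

                                          e.g. `fan_out upgrade_host host1 host2 host3`, where `upgrade_host` might ssh in and run the distro upgrade.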

                                          1. 5

                                            Published versions 10.1.1 and 10.1.2 would wipe all files they could find/touch. 9.2.2 and 11.x would leave you a message on your desktop. GitHub/npm have removed versions 9.2.2 and the 10.x releases but left 11.x up. I’m curious what everyone’s take on that is.

                                            1. 16

                                              I wonder who at System76 was responsible for evaluating all possible directions they could invest in, and decided the desktop environment is the biggest deficiency of System76

                                              1. 11

                                                It’s also great marketing. I’ve heard “System76” way more since they have released Pop_OS. So while people may not be buying machines for the OS it seems that as a pretty popular distro it keeps the name in their head and they may be likely to buy a system on the next upgrade.

                                                1. 1

                                                  Well, I’d buy a machine, but they’re not selling anything with EU layouts or power cords.

                                                2. 5

                                                  I know a few people who run Pop_OS, and none of them run it on a System76 machine, but they all choose Pop over Ubuntu for its Gnome hacks.

                                                  Gnome itself isn’t particularly friendly to hacks — the extension system is really half-baked (though it’s perhaps one of the only uses of the SpiderMonkey JS engine outside Firefox, which is pretty cool!). KDE Plasma has quite a lot of features, but it doesn’t really focus on usability the way it could.

                                                  There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.

                                                  Honestly, I think that if something more friendly than Gnome and KDE came along and was well-supported, it could really be a big deal. “Year of the Linux desktop” is a meme, but it’s something we’ve been flirting with for decades now and the main holdups are compatibility and usability. Compatibility isn’t a big deal if most of what we do on computers is web-based. If we can tame usability, there’s surely a fighting chance. It just needs the financial support of a company like System76 to be able to keep going.

                                                  1. 7

                                                    There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.

                                                    It’s very difficult to do anything meaningful here. Consistency is one of the biggest features of a good DE. This was something that Apple was very good at before they went a bit crazy around 10.7 and they’re still better than most. To give a couple of trivial examples, every application on my Mac has the buttons the same way around in dialog boxes and uses verbs as labels. Every app that has a preferences panel can open it with command-, and has it in the same place in the menus. Neither of these is the case on Windows or any *NIX DE that I’ve used. Whether the Mac way is better or worse than any other system doesn’t really matter, the important thing is that when I’ve learned how to perform an operation on the Mac I can do the same thing on every Mac app.

                                                    In contrast, *NIX applications mostly use one of two widget sets (though there is a long tail of other ones) each of which has subtly different behaviour for things like text navigation shortcut keys. Ones designed for a particular DE use the HIGs from that DE (or, at least, try to) and the KDE and GNOME ones say different things. Even something simple like having a consistent ‘open file’ dialog is very hard in this environment.

                                                    Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.

                                                    1. 4

                                                      There’s a lot of room for disruption in the DE segment of the desktop Linux market.

                                                      Ok, so now we have :

                                                      • kitchen sink / do everything: KDE

                                                      • MacOS-like: Gnome

                                                      • MacOS lookalike: Elementary

                                                      • Old Windows: Gnome 2 forks (e.g. MATE)

                                                      • lightweight environments: XFCE / LXDE

                                                      • tiling: i3, sway, etc. (super niche)

                                                      • something new from scratch but not entirely different: Enlightenment

                                                      So what exactly can be disrupted here when there are so many options? What is the disruptive angle?

                                                      1. 15

                                                        I think you’re replying to @br, not to me, but your post makes me quite sad. All of the DEs that you list are basically variations on the 1984 Macintosh UI model. You have siloed applications, each of which owns one or more windows. Each window is owned by precisely one application and provides a sharp boundary between different UIs.

                                                        The space of UI models beyond these constraints is huge.

                                                        1. 5

                                                          I think any divergence would be interesting, but it’s also punished by users - every time Gnome tries to diverge from Windows 98 (Gnome 3 is obvious, but this has happened long before - see spatial Nautilus), everyone screams at them.

                                                        2. 3

                                                          I would hesitate to call elementary or Gnome Mac-like. Taking some elements more than others, sure. But a lot of critical UI elements from Mac OS are missing, and they admit they’re doing their own thing, which a casual poke would reveal.

                                                          I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time. (I’d say Gnome 2 draws more from both.)

                                                          1. 3

                                                            I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time

                                                            I would have argued that at one point. I’d have argued it loudly around 2001, which is the last time that I really lived with it for longer than 6 months.

                                                            Having just spent a few days giving KDE an honest try for the first time in a while, though, I no longer think so.

                                                            I’d characterize KDE as an attempt to copy all the trends for all time in Windows + Mac + UNIX, add a few innovations and an all-encompassing settings manager, and let each user choose their own specific mix of those.

                                                            My current KDE setup after playing with it for a few days is like an unholy mix of Mac OS X Snow Leopard and i3, with a weird earthy colorscheme that might remind you of Windows XP’s olive scheme if it were a little more brown and less green.

                                                            But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.

                                                            1. 1

                                                              But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.

                                                              I stopped using KDE when 4.x came out (because it was basically tech preview and not usable), but before that I was a big fan of the 3.x series. They always had settings for everything. Good to hear they kept that around.

                                                          2. 2

                                                            GNOME really isn’t macOS like, either by accident or design.

                                                          3. 3

                                                          I am no longer buying this consistency thing and how the Mac is superior. So many things we do all day are web apps, which all look and function completely differently. I use Gmail, Slack, GitHub Enterprise, Office, what-have-you daily at work and they are all just browser tabs. None looks like the other and it is totally fine. The only real local apps I use are my IDE, which is written in Java and also looks nothing like the Mac, a terminal, and a browser.

                                                            1. 7

                                                              Just because it’s what we’re forced to accept today doesn’t mean the current state we’re in is desirable. If you know what we’ve lost, you’d miss it too.

                                                              1. 2

                                                              I am saying that the time of native apps is over and it is not coming back. Web apps, and web apps disguised as desktop applications à la Electron, are going to dominate the future. Even traditionally desktop-heavy things like IDEs are moving into the cloud and the browser. It may be unfortunate, but it is a reality. So even if the Mac was superior in its design, the importance of that is fading quickly.

                                                                1. 2

                                                                  “The time of native apps is over .. webapps … the future”

                                                                  Non-rhetorical question: Why is that, though?

                                                                  1. 4

                                                                    Write once, deploy everywhere.

                                                                    Google has done the hard work of implementing a JS platform for almost every computing platform in existence. By targeting that platform, you reach more users for less developer-hours.

                                                                    1. 3

                                                                    The web is the easiest and best understood application deployment platform there is. Want to upgrade all users? F5 and you are done. Best of all: it is cross-platform.

                                                                    2. 1

                                                                      I mean, if you really care about such things, the Mac has plenty of native applications and the users there still fight for such things. But you’re right that most don’t on most platforms, even the Mac.

                                                                  2. 2

                                                                    And that’s why the Linux desktop I use most (outside of work) is… ChromeOS.

                                                                    Now, I primarily use it for entertainment like video streaming. But with just a SSH client, I can access my “for fun” development machine too.

                                                                  3. 3

                                                                    Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.

                                                                    Honestly, I’d say Windows is more easily extensible. I could write a shell extension and immediately reap its benefit in all applications - I couldn’t say the same for other DEs without probably having to patch the source, and that’ll be a pain.

                                                                    1. 1

                                                                      GNOME HIG also keeps changing, which creates more fragmentation.

                                                                      20 years ago, they did express a desire of unification: https://lwn.net/Articles/8210/

                                                                  4. 1

                                                                    It certainly is a differentiator.

                                                                  1. 9

                                                                    Hot dog I thought the Enlightenment desktop was dead. Happy to see that it’s not :)

                                                                    1. 5

                                                                      It’s still getting frequent improvements! And Tizen uses EFL widgets. Enlightenment desktop even has full Wayland support!

                                                                    1. 45

                                                                      This is a very… non-nuanced title. But hey, who am I to disagree. Anyway, shoot if you have questions :)

                                                                      1. 27

                                                                        My dream is that I fire up Firefox and it doesn’t make a single network request until I click a bookmark or type a URL and hit enter. Do you think there’s any hope of getting that as an option? As it is I’ve found it’s impossible to configure this behavior without external tools.

                                                                        1. 19

                                                                          Unfortunately not. There are many things we can’t do out of the box, like Netflix (DRM), OpenH264(5?). We’ll also need updated info for intermediate certificates and revocations and then updates for the browser itself and addons. I could go on.

                                                                          Surely it’s technically feasible to invent a pref and put all of those checks behind this pref. But there’s no point in shipping a not-very-usable browser from our perspective. Conway’s law further dictates that every team needs their own switch and config and backend. :) :(

                                                                          1. 9

                                                                            Why do DRM and OpenH264 require network connections on startup?

                                                                            I also don’t see how adding an option would render the browser not-very-usable, perhaps you meant something else?

                                                                            1. 9

                                                                              Why do DRM and OpenH264 require network connections on startup?

                                                                              AFAIK it’s a legal work-around: Mozilla can’t distribute an H264 decoder themselves so they have users (automatically) download one from Cisco’s website on their own machine. Sure, you could download it on demand when the user first encounters an H264 stream … but it would put Firefox at an even greater disadvantage compared to browsers willing to pay the MPEG extortion fee.

                                                                              I also don’t see how adding an option would render the browser not-very-usable, perhaps you meant something else?

                                                                              Obligatory Coding Horror link ; ). What you are looking for should be possible with proxies on Firefox (but not Chrome last I checked). I would suggest checking out the Tor browser fork and the extension API.

                                                                              1. 3

                                                                                it’s a legal work-around: Mozilla can’t distribute an H264 decoder themselves so they have users (automatically) download one from Cisco’s website on their own machine.

                                                                                Wouldn’t Firefox download it whenever it updates itself? Not every time it starts up?

                                                                                Obligatory Coding Horror link ; ). What you are looking for should be possible with proxies on Firefox (but not Chrome last I checked). I would suggest checking out the Tor browser fork and the extension API.

                                                                                I am not the one who asked for this feature, but I’m sure they would be fine with an option in about:config. Failing that, a series of options to disable features that make unprompted requests would at least get them closer (some of the aforementioned features already have that).

                                                                                1. 1

                                                                                  Wouldn’t Firefox download it whenever it updates itself? Not every time it starts up?

                                                                                  That’s as far as I know and I’m too lazy to find out more 😝. Maybe the OP was talking about first launch?

                                                                                  Regardless of the exact legal and technical rationale, a web browser’s job is to display content to the user as fast as possible and pre-fetching resources eliminates lag. Whether that is checking for OpenH264 updates or simple dns-prefetching, the improvement in UX is what justifies the minimal privacy leakage from preemptively downloading oft-used resources. Or, at least that is what I think the OP was trying to get across : )

                                                                                  … I’m sure they would be fine with an option in about:config. Failing that, a series of options to disable features that make unprompted requests would at least get them closer (some of the aforementioned features already have that).

                                                                                  It could work as an about:config option, but you would still have to convince someone to spend resources to get it mainlined. Hence why I suggested checking the extension API : )

                                                                                  Given Tor’s threat model, I would assume they would have already done a much more thorough job at eliminating network requests that would compromise privacy. And if not, they would have the organizational capacity and motivation to implement and upstream such a feature. The Tor Browser can be used as a normal browser by disabling Onion routing via an about:config setting.

                                                                                  1. 1

                                                                                    Regardless of the exact legal and technical rationale, a web browser’s job is to display content to the user as fast as possible and pre-fetching resources eliminates lag. Whether that is checking for OpenH264 updates or simple dns-prefetching, the improvement in UX is what justifies the minimal privacy leakage from preemptively downloading oft-used resources. Or, at least that is what I think the OP was trying to get across : )

                                                                                    Pre-fetching sometimes eliminates lag and sometimes causes it by taking bandwidth from more important things. Maybe OP meant to argue that these concerns are negligible and not deserving of a configuration option, but it’s hard to infer it from what they wrote.

                                                                                  2. 1

                                                                                    Wouldn’t Firefox download it whenever it updates itself? Not every time it starts up?

                                                                                    Not being privy to the details myself, I could see that counting as “distribution” where download on boot does not. #NotALawyer

                                                                                    1. 1

                                                                                      My guess is that the Mozilla guy didn’t answer the question directly, and it probably doesn’t actually download it with every startup as he seemed to imply.

                                                                              2. 3

                                                                                I think it would be fair to include an option to allow power users to pull these updates rather than have these pushed. In the absence of this option, Mozilla is, or is capable of, collecting telemetry on my use of Firefox without my consent and violating the privacy ethos it espouses so much in its marketing.

                                                                                If you proxy Firefox on launch (on Mac I use CharlesProxy) you can see the huge amount of phoning home it does at launch, even with every available update setting in Firefox set to manual/not-automatic.

                                                                              3. 9

                                                                                Mozilla seems to be running in the opposite direction with sponsored links showing up now in the new tab page, etc. I could be wrong though…

                                                                              4. 11

                                                                                Serious question: What do you think Firefox could learn from Chrome’s security? For example, where does Chrome do better?

                                                                                1. 26

                                                                                  This is from the top of my head. There are many differences. But here’s an interesting tradeoff:

                                                                                  Their UI has a native implementation which makes sandboxing and privilege separation easier. We have opted to implement our UI in HTML and JavaScript which is great for shared goals in accessibility, performance improvements, community contributions, and extensibility. But it also means that our most privileged process contains a full HTML rendering engine with JavaScript and JIT and all.

                                                                                  1. 2

                                                                                    Has there been any consideration of tools like Caja to sandbox the JS that runs in that process?

                                                                                    1. 6

                                                                                      Caja is for JS<>JS isolation, but the main threat here is in JS escaping to native code (e.g. through a JIT bug), where Caja has no power.

                                                                                      1. 5

                                                                                        We’ve been using several restrictions in terms of what our UI code can do and where it can and cannot come from. E.g., script elements can’t point to the web but rather inside the Firefox package (e.g., the about URL scheme). We’ve also implemented static analysis checks for obvious XSS bugs and are using CSP. We’ve summarized our mitigation in this fine blog post here: https://blog.mozilla.org/attack-and-defense/2020/07/07/hardening-firefox-against-injection-attacks-the-technical-details/

                                                                                  2. 5

                                                                                    Well, if not the most secure web browser on the market, then definitely the second most secure! (never mind that there are only two)

                                                                                    1. 2

                                                                                      I’m really glad to see this kind of partitioning being done!

                                                                                      1. 1

                                                                                        What percentage of the browser do you expect to be able to sandbox in this way? Isn’t there work going on to implement shared memory between WASM modules?

                                                                                      1. 1

                                                                                        Noise and/or heat is going to be an issue in a 1u server like that. If you want a rack mountable case then aim for a 4u or larger, but really just build a tower if you’re not going to put it in a rack.

                                                                                        1. 1

                                                                                          Is this still on track to get merged into the kernel?

                                                                                          1. 2

                                                                                            This is exactly why I refuse to use npm, pip, etc. I only use the OS’s package manager, which uses a cryptographically signed package repo. I absolutely hate these hacky workarounds.

                                                                                            1. 1

                                                                                              And you are sure that zero packagers use NPM or pip as a source for the OS packages and not the source repo? (Am I being paranoid now?)

                                                                                              1. 1

                                                                                                I’m sure there are. And I hate that. But at least it’s going through my OS’s package manager, making it easy to use a single interface for auditing potential security issues.

                                                                                              2. 1

                                                                                                The issue is that sometimes you’re far behind. For example, python-cryptography is still stuck at 3.2.1 on RHEL 8… So either you use pip… or a very old version…

                                                                                                1. 1

                                                                                                  Fortunately, that’s not an issue I have being a BSD user using the nearly-always-up-to-date ports tree. I enjoy up-to-date software on a regular basis. Minimal lag between when a project’s release is published and when the ports tree gets updated to the new version.

                                                                                                  1. 2

                                                                                                    How is this different than using pip? You manually download the file?!

                                                                                                    1. 3

                                                                                                      The problem with per-language package repos like npm is that anyone and everyone has access to upload their project. That inherently means users must trust the most malicious of developers who upload malware to the repo.

                                                                                                      In the case of FreeBSD ports, the ports tree is gated by FreeBSD developers who have the opportunity to audit every single part of creating new ports or updating existing ports. It’s much easier to place trust in a (relatively) small set of developers who ensure sanity before committal.

                                                                                                      The package manager I use for my system (FreeBSD’s pkg) makes it incredibly easy to audit packages, even checking something called VuXML to check if any of your installed packages have known vulnerabilities. I can see which files (config, lib, application, etc.) have changed from their default since pkg tracks hashes for each file it installs. Additionally, the package repo itself is cryptographically signed so that it’s not possible to inject malicious code in transit. If the server hosting the package repo is compromised, there’s no problem since the private crypto key material is stored elsewhere. And this bit of crypto is protected by the OS itself.
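                                                                                                      The per-file hash tracking described above can be sketched in a few lines. This is only a toy illustration of the idea, not pkg’s actual code or database format:

                                                                                                      ```python
                                                                                                      import hashlib
                                                                                                      from pathlib import Path

                                                                                                      def sha256_of(path):
                                                                                                          """Hash a file's contents, as recorded at install time."""
                                                                                                          return hashlib.sha256(Path(path).read_bytes()).hexdigest()

                                                                                                      def record_manifest(paths):
                                                                                                          """Install time: remember the hash of every file laid down."""
                                                                                                          return {p: sha256_of(p) for p in paths}

                                                                                                      def changed_files(manifest):
                                                                                                          """Audit time: list files that no longer match their recorded hash."""
                                                                                                          return [p for p, digest in manifest.items() if sha256_of(p) != digest]
                                                                                                      ```

                                                                                                      Run `changed_files()` over the manifest later and anything an admin (or attacker) modified shows up, which is essentially what pkg’s consistency check gives you for free.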

                                                                                                      1. 1

                                                                                                        That’s fine in theory, but when someone packages a program for FreeBSD that uses a language-specific package manager, they use the built-in infrastructure in the ports tree that downloads the dependencies, then packages them in distfiles and records their hash. This is no more secure than pulling from upstream directly. The folks that package things for FreeBSD aren’t auditing the upstream any more than npm / pip / gem / whatever does.

                                                                                                        The only thing that the signature gives you is an attestation that the package was built on a FreeBSD build machine and has not been tampered with between there and you by anyone who did not have access to the signing key. It does not give you any assurance that the build machine wasn’t compromised or that there weren’t supply-chain vulnerabilities upstream from the builders.

                                                                                                        Most FreeBSD packages don’t use reproducible builds, so you don’t have any assurance that your packages contain the code that they claimed they did: if you try to rebuild locally from the same port, you may or may not get the same binary. Is the one you got trojaned? Who knows.

                                                                                                        pkg audit is great, but npm and friends have similar things that tell you if there are published vulnerabilities in their libraries. They have two problems:

                                                                                                        • They tell you only about published vulnerabilities. Good projects will go through the process of getting a CVE assigned and doing coordinated disclosure. Others just push out a new version. The auditing tools tell you only about the former.
                                                                                                        • They are very coarse-grained. They don’t let you know if the vulnerability in a library is on a code path used by anything you have installed and they don’t let you know if that codepath (if it is reachable) is using any data that can be influenced by an attacker. So pkg audit shows a vulnerability in curl’s URL parsing. Does it matter? Is curl used only with trusted URLs? Maybe it’s fine, but can a server-side redirect trigger it?
                                                                                                    2. 1

                                                                                                      How minimal?

                                                                                                      1. 2

                                                                                                        Sometimes minutes. Sometimes hours. Sometimes days. It depends on the time and resources of a volunteer-run project. For example, I’ve seen FreeBSD update the Tor port just minutes after a new release. FreeBSD generally updates Firefox to RC releases so that we can test what will be the next version before it comes out (which means we have a negative time window in this particular case.)

                                                                                                        1. 1

                                                                                                          Sometimes minutes. Sometimes hours. Sometimes days.

                                                                                                          So basically the same boat as RHEL, then.

                                                                                                1. 13

                                                                                                  I wonder why the kernel community seems to have structural issues when it comes to filesystems - btrfs is a bit of a superfund site, ext4 is the best most people have, and ReiserFS’s trajectory was cut short for uh, Reasons. Everything else people would want to use (e.g. ZFS, but also XFS, JFS, AdvFS, etc.) is a hand-me-down from commercial Unix vendors.

                                                                                                  1. 13

                                                                                                    On all of the servers I deploy, I use whatever the OS defaults to for a root filesystem (generally ext4) but if I need a data partition, I reach for XFS and have yet to be disappointed with it.

                                                                                                    Ext4 is pretty darned stable now and no longer has some of the limitations that pushed me to XFS for large volumes. But XFS is hard to beat. It’s not some cast-away at all, it’s extremely well designed, perhaps as well or better than the rest. It continues to evolve and is usually one of the first filesystems to support newer features like reflinks.

                                                                                                    I don’t see why XFS couldn’t replace ext4 as a default filesystem in general-purpose Linux distributions, my best guess as to why it hasn’t is some blend of “not-invented-here” and the fact that ext4 is good enough in 99% of cases.

                                                                                                    1. 3

                                                                                                      It would be great if the recent uplift of xfs also added data+metadata checksums. It would be perfect for a lot of situations where people want zfs/btrfs currently.

                                                                                                      It’s a great replacement for ext4, but not really for the other situations.

                                                                                                      1. 1

                                                                                                        Yes, I would love to see some of ZFS’ data integrity features in XFS.

                                                                                                        I’d love to tinker with ZFS more but I work in an environment where buying a big expensive box of SAN is preferable to spending time building our own storage arrays.

                                                                                                        1. 1

                                                                                                          I’m not sure if it’s what you meant, but XFS now has support for checksums for at-rest protection against bitrot. https://www.kernel.org/doc/html/latest/filesystems/xfs-self-describing-metadata.html

                                                                                                          1. 2

                                                                                                            This only applies to the metadata though, not to the actual data stored. (Unless I missed some newer changes?)

                                                                                                            1. 1

                                                                                                              No, you’re right. I can’t find it but I know I read somewhere in the past six months that XFS was getting this. The problem is that XFS doesn’t do block device management which means at best it can detect bitrot but it can’t do anything about it on its own because (necessarily) the RAIDing would take place in another, independent layer.

                                                                                                        2. 3

                                                                                                          I don’t see why XFS couldn’t replace ext4 as a default filesystem in general-purpose Linux distributions

                                                                                                          It is the default in RHEL 8, for what it’s worth:
                                                                                                          https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_file_systems/assembly_getting-started-with-xfs-managing-file-systems

                                                                                                        3. 2

                                                                                                          Yep. I’ve been using xfs for 20 years now when I need a single-drive FS, and I use zfs when I need a multiple-drive FS. The ext4 and btrfs issues did not increase my confidence.

                                                                                                        1. 4

                                                                                                          It is the default filesystem in Fedora since 33.

                                                                                                          1. 3

                                                                                                            Should I continue learning Django or is it better to move to the likes of Rocket and Actix? (0 experience in Rust , btw)

                                                                                                            And what’s the consensus on SSR vs REST + SPA?

                                                                                                            1. 6

                                                                                                              Play with rust and see if you like it first. No sense in discussing frameworks if you don’t even like working in the language.

                                                                                                              1. 4

                                                                                                                Should I continue learning Django or is it better to move to the likes of Rocket and Actix? (0 experience in Rust , btw)

                                                                                                                It depends on what you want to achieve. I can tell you there are billion-dollar companies out there running on Django/Python (e.g. Clubhouse). Rust itself has a higher learning curve and requires almost a rewiring if you are a traditional programmer, but it gets a lot of things right and makes you think through a lot you used to ignore. So pick what you want.

                                                                                                                what’s the consensus on SSR vs REST + SPA

                                                                                                                It’s hilarious that we keep doing these cycles between server heavy vs client heavy views. I’ve seen it 3 times in my life already. So learn it but don’t make it a religion.

                                                                                                                1. 2

                                                                                                                  I’m late to the party, but if you’re familiar with Python it’s probably worth starting here. If you’re not very experienced with webapp development Django is pretty nice (at least last few times I’ve used it).

                                                                                                                  I haven’t tried any of the web frameworks for Rust, but I imagine that the learning curve is steep with a new language and a new framework which is in heavy development and isn’t as well known as, say, Django. That means searching for help when running into problems is a lot harder.

                                                                                                                  1. 1

                                                                                                                    And what’s the consensus on SSR vs REST + SPA?

                                                                                                                    I’m not sure there is a consensus, but in the React world there seems to be solid momentum around “why not both?”. Next JS (and Gatsby?) will let you mix SSR, static, and client-side/SPA fairly seamlessly. Making a page static or SSR is often just a matter of moving your query to a specially-named function. Combined with Typescript it’s a pretty nice place to be. But, as with many things in JS-land, there can be rather a lot of complexity under the hood.

                                                                                                                  1. 18

                                                                                                                    What a stupid thing to need to do. Good on mozilla for getting it done though.

                                                                                                                    1. 18

                                                                                                                      Neat idea. I’m not sure this is a captcha, but rather just a rate limiter.

                                                                                                                      1. 13

                                                                                                                        So much this. A proof-of-work scheme will up the ante, but not the way you think. People need to be able to do the work on the cheap (unless you want to put mobile users at a significant disadvantage) and malware/spammers can outscale you significantly.

                                                                                                                        Ever heard of parasitic computing? TLDR: It’s what kickstarted monero. Any website (or an ad in that website) can run arbitrary code on the device of every visitor. You can even shard the work, do it relatively low-profile if you have the scale. Even if pre-computing is hard, with ad networks and live-action during page views an attacker can get challenges solved just-in-time.

                                                                                                                        1. 9

                                                                                                                          The way I look at it, it’s meant to defeat crawlers and spam bots; they attempt to cover the whole internet and want to spend 99% of their time parsing and/or spamming, but if this got popular enough to prompt bot authors to take the time to actually implement WASM/WebWorkers or a custom Scrypt shim for it, they might still end up spending 99% of their time hashing instead.

                                                                                                                          Something tells me they will probably give up and start knocking on the next door down the lane. And if I can force bot authors to invest in a $1M USD+ /year black hat “distributed computing” project so they can more effectively spam Cialis and Michael Kors Handbags ads, maybe that’s a good thing? I never made $1M a year in my life, probably never will, I would be glad to be able to generate that much value tho.

                                                                                                                          If it comes down to a targeted attack on a specific site, captchas can already be defeated by captcha farm services or various other exploits (https://twitter.com/FGRibreau/status/1080810518493966337). Defeating that kind of targeted attack is a whole different problem domain.

                                                                                                                          This is just an alternate approach to put the thumb screws on the bot authors in a different way, without requiring the user to read, stop and think, submit to surveillance, or even click on anything.
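                                                                                                                          For anyone curious what such a scheme boils down to, here is a toy sketch of a proof-of-work challenge (all parameters are made up; I’m using SHA-256 for brevity where a real deployment would likely pick a memory-hard function like scrypt to blunt GPU farms):

```python
import hashlib
import secrets

def make_challenge():
    """Server side: a random challenge plus a difficulty in leading zero bits."""
    return secrets.token_bytes(16), 12  # 12 bits => ~4096 hashes on average

def check(challenge, nonce, bits):
    """Verification is a single hash, no matter how long the client searched."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

def solve(challenge, bits):
    """Client side: brute-force a nonce that passes the check."""
    nonce = 0
    while not check(challenge, nonce, bits):
        nonce += 1
    return nonce
```

The asymmetry the thread is debating is visible here: the server pays one hash to verify what cost the client thousands of hashes to find.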

                                                                                                                          1. 9

                                                                                                                            This sounds very much like greytrapping. I first saw this in OpenBSD’s spamd: the first time you got an SMTP connection from an IP address, it would reply with a TCP window size of 1, one byte per second, with a temporary failure error message. The process doing this reply consumed almost no resources. If the connecting application tried again in a sensible amount of time then it would be allowed to talk to the real mail server.

                                                                                                                            When this was first introduced, it blocked around 95% of spam. Spammers were using single-threaded processes to send mail and so it also tied each one up for a minute or so, reducing the total amount of spam in the world. Then two things happened. The first was that spammers moved to non-blocking spam-sending things so that their sending load was as small as the server’s. The second was that they started retrying failed addresses. These days, greytrapping does almost nothing.
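                                                                                                                            The decision logic behind that scheme is small enough to sketch; a toy in-memory version (timings and verdict strings here are made up, and a real implementation would persist state and expire old entries):

```python
import time

# Keyed on (client IP, sender, recipient); value is when we first saw the tuple.
_seen = {}
RETRY_WINDOW = 300  # seconds the sender must wait before a retry is accepted

def check(ip, sender, rcpt, now=None):
    """Return an SMTP-style verdict for this delivery attempt."""
    now = time.time() if now is None else now
    first = _seen.setdefault((ip, sender, rcpt), now)
    if now - first < RETRY_WINDOW:
        return "451 temporary failure, try again later"
    return "250 ok"  # a patient, well-behaved MTA retried and gets through
```

A fire-and-forget spam cannon never retries, so it never sees the 250; that was the whole trick, until spammers started retrying.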

                                                                                                                            The problem with any proof-of-work CAPTCHA system is that it’s asymmetric. CPU time on botnets is vastly cheaper than CPU time purchased legitimately. Last time I looked, it was a few cents per compromised machine and then as many cycles as you can spend before you get caught and the victim removes your malware. A machine in a botnet (especially one with an otherwise-idle GPU) can do a lot of hash calculations or whatever in the background.

                                                                                                                            Something tells me they will probably give up and start knocking on the next door down the lane. And if I can force bot authors to invest in a $1M USD+ /year black hat “distributed computing” project so they can more effectively spam Cialis and Micheal Kors Handbags ads, maybe that’s a good thing?

                                                                                                                            It’s a lot less than $1M/year that they spend. All you’re really doing is pushing up the electricity consumption of folks with compromised computers. You’re also pushing up the energy consumption of legitimate users as well. It’s pretty easy to show that this will result in a net increase in greenhouse gas emissions, it’s much harder to show that it will result in a net decrease in spam.

                                                                                                                            1. 2

                                                                                                                              These days, greytrapping does almost nothing.

                                                                                                                              postgrey easily kills at least half the spam coming to my box and saves me tonnes of CPU time

                                                                                                                              1. 1

                                                                                                                                The problem with any proof-of-work CAPTCHA system is that it’s asymmetric. [botnets hash at least 1000x faster than the legitimate user]

                                                                                                                                Asymmetry is also the reason why it does work! Users probably have at least 1000x more patience than a typical spambot.

                                                                                                                                I have no idea what the numbers shake out to / which is the dominant factor, and I don’t really care; the point is that I can still make the spammers’ lives hell and get the results I want right now (humans only past this point), even though I’m not willing to let Google/CloudFlare fingerprint all my users.

                                                                                                                                If botnets solving captchas ever becomes a problem, wouldn’t that be kind of a good sign? It would mean the centralized “big tech” panopticons are losing traction. Folks are moving to a more distributed internet again. I’d be happy to step into that world and work forward from there 😊.

                                                                                                                              2. 5

                                                                                                                                captchas can already be defeated by […] or various other exploits (https://twitter.com/FGRibreau/status/1080810518493966337)

                                                                                                                                An earlier version of google’s captcha was automated in a similar fashion: they scraped the images and did a google reverse image search on them!

                                                                                                                                1. 3

                                                                                                                                  I can’t find a link to a reference, but I recall a conversation with my advisor in grad school about the idea of “postage” on email where for each message sent to a server a proof of work would need to be done. Similar idea of reducing spam. It might be something in the literature worth looking into.

                                                                                                                                  1. 3

                                                                                                                                    There’s Hashcash, but there are probably other systems as well. The idea is that you add a X-Hashcash header with a comparatively expensive hash of the content and some headers, making bulk emails computationally expensive.

                                                                                                                                    It never really caught on; I used it for a while years ago, but I’ve never received an email with this header since 2007 (I just checked). It seems used in Bitcoin nowadays according to the Wikipedia page, but it started out as an email thing. Kind of ironic really.
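                                                                                                                                    Minting such a stamp is just a brute-force search; a simplified sketch (a real X-Hashcash header also carries a version, date, and random salt, and the original scheme asked for SHA-1 with 20 leading zero bits):

```python
import hashlib
from itertools import count

def mint(resource, bits=20):
    """Find a counter making sha1('resource:counter') start with `bits` zero bits."""
    for counter in count():
        stamp = "%s:%d" % (resource, counter)
        digest = hashlib.sha1(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return stamp

def valid(stamp, bits=20):
    """Checking a stamp costs one hash; minting costs ~2**bits hashes on average."""
    digest = hashlib.sha1(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0
```

The cost is negligible for one personal email and ruinous at spam volumes, which was the entire point.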

                                                                                                                                    1. 1

                                                                                                                                      “Internet Mail 2000” from Daniel J. Bernstein? https://en.m.wikipedia.org/wiki/Internet_Mail_2000

                                                                                                                                  2. 2

                                                                                                                                    That is why we can’t have nice things… It is really heartbreaking how almost every technological advance can and will be turned to something evil.

                                                                                                                                    1. 1

                                                                                                                                      The downsides of a global economy for everything :-(

                                                                                                                                  3. 3

                                                                                                                                    Captchas are essentially rate limiters too, given enough determination from abusers.

                                                                                                                                    1. 4

                                                                                                                                      Maybe. The distinction I would make is that a captcha attempts to assert that the user is human, where this scheme does not.

                                                                                                                                      1. 2

                                                                                                                                        I mean, objectively, yes. But, since spammers are automating passing the “human test” captchas, what is the value of that assertion? Our “human test” captchas come at the cost of impeding actual humans, and are failing to protect us from the sophisticated spammers, anyway. This proposed solution is better for humans, and will still prevent less sophisticated attackers.

                                                                                                                                        If it can keep me from being frustrated that there are 4 pixels on the top left tile that happen to actually be part of the traffic light, then by all means, sign me the hell up!

                                                                                                                                  1. 1

                                                                                                                                    The triple DES key wrap functionality now conforms to RFC 3217 but is no longer interoperable with OpenSSL 1.1.1.

                                                                                                                                    I suspect that will functionally kill triple des off. Nice!

                                                                                                                                    1. 6

                                                                                                                                      Does anyone know if this sort of thing can be automated? There’s an old discussion on adding this to imagemagick, but seems like the conversation ended without a resolution.

                                                                                                                                      https://legacy.imagemagick.org/discourse-server/viewtopic.php?t=8524

                                                                                                                                      1. 5

                                                                                                                                        It definitely can be done; the grid pattern seems to be pretty regular. Which actually makes me wonder if the FFT image could be easily FFT’d again to remove only the 2? 4? dots of the pattern… (Going away to try.) The answer is no; the repeated pattern is not as easy to spot as I expected. https://imgur.com/a/4GfUsqy

                                                                                                                                        1. 7

                                                                                                                                          So I dunno about 2D FFTs for images, but in the audio world this trick (looking for structure in an FFT using another FFT) is called “cepstral analysis”, and the product is a “cepstrum”, and the secret sauce is to take the log magnitude of the first FFT before applying the second. (Or, if you want it to be invertable, you can take the complex FFT of the complex log of the complex FFT.)
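                                                                                                                                          In NumPy terms, the 1D real cepstrum is only a few lines (the image case applies the same idea in 2D; the epsilon and signal here are arbitrary choices for illustration):

```python
import numpy as np

def real_cepstrum(x):
    """FFT -> log magnitude -> inverse FFT; peaks expose periodic spectral structure."""
    spectrum = np.fft.fft(x)
    log_mag = np.log(np.abs(spectrum) + 1e-12)  # epsilon guards against log(0)
    return np.fft.ifft(log_mag).real

# An impulse train with period 50 has a comb-shaped spectrum; the cepstrum
# collapses that comb back into a spike at "quefrency" 50.
x = np.zeros(1000)
x[::50] = 1.0
c = real_cepstrum(x)
```

The log is what makes the trick work: it flattens the huge dynamic range of the spectrum so the comb’s regular spacing dominates the second transform.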

                                                                                                                                          1. 2

                                                                                                                                            Do you have the code for this somewhere?

                                                                                                                                            1. 7

                                                                                                                                              I used the same app the OP did. Fiji (https://imagej.net/software/fiji/)

                                                                                                                              Although Fiji’s version of FFT is slightly different from what other software produces (it’s prettier, though :) ), I didn’t dig into the details of the difference. Edit: I’ve been nerd-sniped and did dig into the difference: you can get the same “prettiness” from gimp’s or imagemagick’s fft by splitting the color levels into pos/neg in the middle and applying a logarithm-like color mapping, but you’re losing information that way: https://imgur.com/a/GwYjchG Also, they wrap the FFT sections differently: Fiji seems to hold the magnitude and phase in separate images and shift the quarters around, while gimp keeps everything together.

                                                                                                                                              1. 1

                                                                                                                                                you can get the same “prettiness” from gimp’s or imagemagick’s fft by splitting the color levels into pos/neg in the middle and applying logarithm-like color mapping

                                                                                                                                                Interesting. Thanks for the follow up! :)

                                                                                                                                            2. 1

                                                                                                                                              the repeated pattern is not as easy to spot as I expected.

                                                                                                                                              Maybe that’s the bit where we could use ML. /sarcasm