This is a very… non-nuanced title. But hey, who am I to disagree. Anyway, shoot if you have questions :)
My dream is that I fire up Firefox and it doesn’t make a single network request until I click a bookmark or type a URL and hit enter. Do you think there’s any hope of getting that as an option? As it is I’ve found it’s impossible to configure this behavior without external tools.
Unfortunately not. There are many things we can’t do out of the box, like Netflix (DRM), OpenH264(5?). We’ll also need updated info for intermediate certificates and revocations and then updates for the browser itself and addons. I could go on.
Surely it’s technically feasible to invent a pref and put all of those checks behind it. But there’s no point in shipping a not-very-usable browser from our perspective. Conway’s law further dictates that every team needs their own switch and config and backend. :) :(
Why do DRM and OpenH264 require network connections on startup?
I also don’t see how adding an option would render the browser not-very-usable, perhaps you meant something else?
AFAIK it’s a legal work-around: Mozilla can’t distribute an H264 decoder themselves so they have users (automatically) download one from Cisco’s website on their own machine. Sure, you could download it on demand when the user first encounters an H264 stream … but it would put Firefox at an even greater disadvantage compared to browsers willing to pay the MPEG extortion fee.
Obligatory Coding Horror link ; ). What you are looking for should be possible with proxies on Firefox (but not Chrome last I checked). I would suggest checking out the Tor browser fork and the extension API.
Wouldn’t Firefox download it whenever it updates itself? Not every time it starts up?
I am not the one who asked for this feature, but I’m sure they would be fine with an option in about:config. Failing that, a series of options to disable features that make unprompted requests would at least get them closer (some of the aforementioned features already have that).
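For what it’s worth, several of the features mentioned in this thread do already have individual prefs. As a rough sketch (pref names change across Firefox versions, so treat these as illustrative rather than a complete or current list), a user.js that trims most unprompted startup traffic might look like:

```javascript
// user.js — illustrative prefs for reducing unsolicited network requests.
// Pref names may be renamed or removed in later versions; verify each
// one in about:config before relying on it.

user_pref("network.dns.disablePrefetch", true);             // no speculative DNS
user_pref("network.prefetch-next", false);                  // no <link rel=prefetch>
user_pref("network.predictor.enabled", false);              // no speculative connections
user_pref("network.captive-portal-service.enabled", false); // no captive-portal probe
user_pref("app.update.auto", false);                        // no automatic app updates
user_pref("extensions.update.enabled", false);              // no add-on update checks
user_pref("toolkit.telemetry.enabled", false);              // no telemetry uploads
```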
That’s as far as I know and I’m too lazy to find out more 😝. Maybe the OP was talking about first launch?
Regardless of the exact legal and technical rationale, a web browser’s job is to display content to the user as fast as possible and pre-fetching resources eliminates lag. Whether that is checking for OpenH264 updates or simple dns-prefetching, the improvement in UX is what justifies the minimal privacy leakage from preemptively downloading oft-used resources. Or, at least that is what I think the OP was trying to get across : )
It could work as an about:config option, but you would still have to convince someone to spend resources to get it mainlined. Hence why I suggested checking the extension API : )
Given Tor’s threat model, I would assume they would have already done a much more thorough job at eliminating network requests that would compromise privacy. And if not, they would have the organizational capacity and motivation to implement and upstream such a feature. The Tor Browser can be used as a normal browser by disabling Onion routing via an about:config setting.
Pre-fetching sometimes eliminates lag and sometimes causes it by taking bandwidth from more important things. Maybe OP meant to argue that these concerns are negligible and not deserving of a configuration option, but it’s hard to infer it from what they wrote.
Not being privy to the details myself, I could see that counting as “distribution” where download on boot does not. #NotALawyer
My guess is that the Mozilla guy didn’t answer the question directly, and it probably doesn’t actually download it on every startup as he seemed to imply.
I think it would be fair to include an option to allow power users to pull these updates rather than have these pushed. In the absence of this option, Mozilla is, or is capable of, collecting telemetry on my use of Firefox without my consent and violating the privacy ethos it espouses so much in its marketing.
If you proxy Firefox at launch (on macOS I use Charles Proxy), you can see the huge amount of phoning home it does, even with every available update setting in Firefox set to manual/not-automatic.
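To reproduce this observation without an external launcher, you can point Firefox at a local intercepting proxy through prefs. The pref names below are real; the listener address is an assumption — use whatever port your proxy (Charles, mitmproxy, etc.) is actually listening on:

```javascript
// user.js — route all Firefox traffic through a local proxy for inspection.
// 127.0.0.1:8888 is an assumed listener address; adjust to your proxy.
user_pref("network.proxy.type", 1);            // 1 = manual proxy configuration
user_pref("network.proxy.http", "127.0.0.1");
user_pref("network.proxy.http_port", 8888);
user_pref("network.proxy.ssl", "127.0.0.1");   // HTTPS traffic too
user_pref("network.proxy.ssl_port", 8888);
user_pref("network.proxy.no_proxies_on", "");  // don't bypass anything
```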
Mozilla seems to be running in the opposite direction with sponsored links showing up now in the new tab page, etc. I could be wrong though…
Serious question: What do you think Firefox could learn from Chrome security? For example, where does Chrome do better?
This is off the top of my head. There are many differences. But here’s an interesting tradeoff:
Their UI has a native implementation which makes sandboxing and privilege separation easier. We have opted to implement our UI in HTML and JavaScript which is great for shared goals in accessibility, performance improvements, community contributions, and extensibility. But it also means that our most privileged process contains a full HTML rendering engine with JavaScript and JIT and all.
Has there been any consideration of tools like Caja to sandbox the JS that runs in that process?
Caja is for JS<>JS isolation, but the main threat here is in JS escaping to native code (e.g. through a JIT bug), where Caja has no power.
We’ve been using several restrictions in terms of what our UI code can do and where it can and cannot come from. E.g., script elements can’t point to the web but only inside the Firefox package (e.g., the about: URL scheme). We’ve also implemented static analysis checks for obvious XSS bugs and are using CSP. We’ve summarized our mitigations in this fine blog post here: https://blog.mozilla.org/attack-and-defense/2020/07/07/hardening-firefox-against-injection-attacks-the-technical-details/
Well, if not the most secure web browser on the market, then definitely the second most secure! (never mind that there are only two)
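As a rough illustration of the idea described in the blog post (this is not Mozilla’s actual policy, just a sketch of the shape such a restriction takes), a CSP for a privileged page that only permits scripts packaged with the browser might look like:

```
Content-Security-Policy: default-src chrome:; script-src chrome: resource:; object-src 'none'
```

Here chrome: and resource: are Firefox’s internal URL schemes for packaged content, so any script element pointing at an http(s): URL would be refused by the policy.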
I’m really glad to see this kind of partitioning being done!
What percentage of the browser do you expect to be able to sandbox in this way? Isn’t there work going on to implement shared memory between WASM modules?
A more technical overview of RLBox: https://plsyssec.github.io/rlbox_sandboxing_api/sphinx/
See also https://lobste.rs/s/wacazl/webassembly_back_again_fine_grained and https://hacks.mozilla.org/2021/12/webassembly-and-back-again-fine-grained-sandboxing-in-firefox-95/
I don’t believe the headline to be accurate:
Firefox still does not do process isolation as thoroughly as Chromium; IIRC GPU, audio, and networking are all handled in the same process on Linux.
I believe Firefox still does not match Chrome’s level of site isolation either, though that could have changed recently.
Firefox uses a fork of jemalloc (an allocator aimed at performance, not security); while security improvements have been added, it isn’t as hardened as the allocator Chromium uses.
Full disclosure: I use Firefox daily. I love it, but I don’t believe it matches Chromium’s security. It has gotten better in recent times, though.
@freddyb, please correct any misinformation in my post; I could very well be wrong about some of this.
GPU and Networking are in their own socket where supported. We do have Site Isolation. The blog post you’re quoting is a bit out of date.
Thanks. I figured it was a bit out of date :)
Keep up the great work!
The GPU process is not used on Wayland yet, but I think there was an old implementation for GLX (no idea about X11-EGL; probably also not). The audio process is used on Linux for sure (I did some work to enable audioipc on FreeBSD).
Fission is the same site isolation, but it’s still not on by default. Check fission.autostart (or the friendly checkbox in Nightly about:preferences).
On the other hand, blocking ads and JS is much easier on Firefox, so in that regard it has better security than Chrome (although not necessarily Chromium).
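For anyone wanting to opt in early, flipping the pref mentioned above is a one-liner in user.js (with the usual caveat that Nightly-era pref names can change before release):

```javascript
// user.js — opt into Fission site isolation before it's on by default.
user_pref("fission.autostart", true);
```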
And Google continues to clamp down on it.
Chrome and Chromium support a user-visible per-site JS toggle and a per-domain cookie toggle. Chromium also supports making third-party frames opt-in by default.
While Mozilla has made progress on multiprocess and site isolation, Chromium is actually upgrading from site isolation to strict origin isolation. Chromium also has separate sandboxed processes for TTS, printing, and other functionality.
Chromium is also the only browser to support trusted-types for XSS protection.
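Trusted Types works by making DOM sinks like innerHTML accept only values produced by a registered policy (created with trustedTypes.createPolicy in the browser). The standalone sketch below only illustrates the kind of escaping such a policy’s createHTML hook might perform; it deliberately avoids the browser-only API so it can run anywhere:

```javascript
// Illustrative escaping step, similar to what a Trusted Types policy's
// createHTML callback might do before a string is allowed into innerHTML.
// (In a real page you would register this with trustedTypes.createPolicy
// and ship a CSP of: require-trusted-types-for 'script'.)
function escapeHTML(s) {
  return s
    .replace(/&/g, "&amp;")   // ampersands first, to avoid double-escaping
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

console.log(escapeHTML('<img src=x onerror="alert(1)">'));
// → &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```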
Right now the Chromium team is working on the V8 memory cage. They’re researching expanding toolchain hardening to leverage Clang’s shadow call stacks on top of its CFI implementation; Mozilla hasn’t enabled Clang’s CFI yet. Neither will matter much if you use distro packages that link system libs built with GCC, or worse, distro packages that build the whole browser with unsupported toolchains.
That being said, Firefox’s sandboxing of graphite and hunspell along with finally getting some site isolation with Fission are important steps forward, and I’m glad to see Firefox catching up. Hopefully they swap out mozjemalloc with something resembling ptmalloc. Or musl’s mallocng. Or…hardened_malloc…
I’d definitely love to see hardened_malloc in Firefox.
What’s painful is the following:
There’s no way to tell which ones actually use memory and whether I might want to reduce the number of processes created.
My motivation for limiting memory use over other aspects is that I use a dedicated profile for work stuff that involves Outlook web and Teams web, plus Jira and Confluence. I absolutely don’t care if something crashes there, but I do care that these memory hogs are somehow constrained. They could even be twice as slow if they used even 10% less memory. Right now with FF 95, I’m completely at a loss regarding memory usage.
Try any of
Unfortunately, we need this high amount of processes to mitigate Spectre vulnerabilities. See https://hacks.mozilla.org/2021/05/introducing-firefox-new-site-isolation-security-architecture/ for more
Oh. I had already gone to about:processes, but your comment made me spend more time in it and now I understand it better. TBH, the UX could really be improved. At the least, the PID shouldn’t only appear at the end of the Name field, because you might want to search by PID (if you’re looking at this because of something you’ve seen in another tool).
What I’d like to know is what the current model is. It’s not one process per tab plus one process per domain per frame per tab: I have a single process for two of my tabs (same domain), yet one process for each of my three lobste.rs tabs.
Also, is there a way to have more sharing or is that a thing of the past? In other words, is there any hope that my two awful outlook and teams tabs can share more?
Good point. Would you be willing to file a bug?
Sure, will do.
Interesting: I noticed Ghostery being quite active… even after having paused it. Not sure what it was doing, but I’m having none if it any more. NoScript and uBlock are probably enough anyway.
The more script and content blockers you have, the more code will run whenever a script attempts to load. If performance is dear to you, set the Firefox built-in tracking protection to strict mode. Each JS/C++ context switch is small, but it adds up.
When Firefox is dealing with the 70–90% of undesired scripts, your add-ons will be less busy.
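Strict mode can also be preset from user.js rather than the Settings UI. The category pref below is the one behind the Standard/Strict/Custom radio buttons (the name is accurate for Firefox of this era, but worth double-checking in about:config):

```javascript
// user.js — preset Enhanced Tracking Protection to strict mode.
user_pref("browser.contentblocking.category", "strict");
```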
I’ve always been under the impression that FF’s tracking protection ran after add-ons such as uBlock, because nothing is ever blocked by it on my machines while uBlock blocks stuff.
Normal mode doesn’t block at all. It loads stuff, but with a separate, isolated cookie jar. This has proven to be the best balance between privacy protection and web compatibility. It’s been shown that most of our users would blame the browser if some important iframe doesn’t work or show up.
Now with a loading tab the user has something to interact with.
So for power users the recommendation is to set it to “strict mode”, which doesn’t isolate but actually blocks.
Good point, I’ve been wondering if FF blocks first, then addons, or vice-versa or even a mix of the two. Any thoughts?
Does ‘about:performance’ help in any way?