After @david_chisnall did the interview with me, it is finally time to continue the relay with someone with a lot more exposed skin in the security game, particularly through a browser that I hope needs no introduction, namely Firefox. I am, of course, talking about Frederik Braun, perhaps better known locally as @freddyb.
Introduce yourself, describe what you do for work and how long you’ve been at it.
Hi! So, I’m a computer nerd living in Berlin with my family. I currently work as a manager for the Firefox Security team, but I do not speak for Mozilla in this post.
My fascination with computers started with DOS games (Commander Keen) on my father’s PC in the early 1990s. He created simple batch scripts so my brother and I could just power on the machine and type the game name. The scripts were basically cd keen followed by keen.exe. Very simple stuff, just changing the directory and calling the executable. However, I soon noticed that the scripts worked only once. Maybe obvious to the reader, but when the game quits, you would still be in that subdirectory. Figuring out what was broken and how to fix it got me into computers.
But I better skip ahead a bit. I already had some understanding of programming, various network protocols, and a bit of web security by the time I went to uni, and was lucky enough to find a group of like-minded people at my university, the Ruhr-University Bochum. This led to us founding the CTF team fluxfingers.
We took it way too seriously for a while: Back then, with only 2-3 competitions happening per year, we still practised every week, working through previous competitions until we had found at least two vulnerabilities per application. At some point, the professors caught wind of that and I was lucky enough to co-create the first lecture on web security (“hackerpraktikum”) with my friend Reiners. We taught it for a couple of years and even made it onto national TV.
Winning a CTF with fluxfingers at some point allowed me to meet folks from Mozilla, which in turn helped me land an internship there: Three months in sunny California with paid airfare, accommodation and a salary. An unbelievable offer. I met a lot of really great people and was allowed to do lots of fun things: I helped run Mozilla’s first and only MozillaCTF; I helped pentest some web apps and gave presentations about web security. It was a really great trip. Mozilla is such a great place to work, I think they spoiled me for life.
As much as I enjoyed spending a winter in warm Silicon Valley, I missed my friends and family at home. I still had some months of university to go, but knew that I did not want to live in the USA. Lucky for me, Mozilla had just started opening an office in Berlin, Germany. So when I finished my studies in 2012, with a diploma thesis (PDF) on browser & web security in the (then new) HTML5 world of rich internet applications, I applied for a full-time position to continue working on security at Mozilla.
My first years at Mozilla were defined by constant change: I helped break & fix web applications before they were allowed to go live on a .mozilla.org subdomain. I contributed to various specifications, like Content-Security-Policy in the W3C, and also helped write DevTools & CSP patches in Firefox. Soon after, I found myself in a bigger reorg and landed on the Firefox OS Security team.
What was it like working on Firefox OS at Mozilla?
The FxOS project was pretty wild, very skunkworks, but also very deeply technical. We provided lots of custom Web APIs that allowed controlling the phone and its hardware in new and interesting ways, way before Googlers spun this idea into PWAs or “Fugu APIs”. The crux was security, though. Our privilege separation model built on the security review process that we had already established for Firefox add-ons: It required that apps be reviewed and signed in order to use phone APIs. In turn, JavaScript just got way more interesting: You could make the phone ring, make it vibrate, read the battery status, and so on. Lots of work went into standardizing these things, but other browser makers were probably not very interested in competition for their mobile operating systems back then. I also built some cool privacy overrides: You could revoke individual app permissions (e.g., disable specific APIs) regardless of what the app manifest said. I’m sure app developers really hated it, but I also know that nerds loved it. This was way before other phone operating systems allowed these kinds of global controls.
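A minimal sketch of two of those APIs, as they were later standardized (browser support varies today, and some browsers removed them again over privacy concerns):

```js
// Minimal sketch of two device APIs from that era as they were later
// standardized (support varies; some browsers removed them again):
navigator.vibrate(200); // Vibration API: buzz the device for 200 ms

// Battery Status API: read charge level and charging state
navigator.getBattery().then((battery) => {
  console.log(`Battery at ${Math.round(battery.level * 100)}%`,
              battery.charging ? "(charging)" : "(discharging)");
});
```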
We also had a lot of XSS-style security bugs in these privileged apps and it was massively fun trying to drive cohesive, wide-reaching security efforts into this hacky, ambitious underdog project. At first, we gave all apps a mandatory Content-Security-Policy (CSP). Then, we had CSP bypasses and other injections leading to information stealing or clickjacking attacks. However, we drove two interesting security projects that managed to live on way beyond Firefox OS: The first project is the eslint rule “no-unsanitized”, which I still maintain. The rule essentially disallows patterns in your JavaScript code that are prone to XSS (e.g., assigning to innerHTML). We managed to bring that rule to all apps that were part of the Firefox OS system by rewriting a lot of code, which solved the majority of our XSS issues.
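To make that concrete, a minimal sketch of what the rule flags, assuming eslint-plugin-no-unsanitized is installed and enabled (the fix shown is just one option):

```js
/* eslint no-unsanitized/property: "error" */

function render(element, userInput) {
  // Flagged by eslint-plugin-no-unsanitized: attacker-controlled strings
  // flowing into innerHTML are a classic XSS sink.
  element.innerHTML = userInput;

  // One safe alternative: treat the input as plain text, not as markup.
  element.textContent = userInput;
}
```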
The second cool thing is Subresource Integrity, SRI: It was inspired by two things. First, Firefox OS becoming more “webby” and hosted, versus packaged and signed. And secondly, a problem I saw with the increasing usage of CDNs: A lot of pages wanted to boost their performance by loading jQuery, Bootstrap, etc. from a shared URL. If all websites use that “default URL” for the library, any visitor might already have the library in their cache and the website will load faster. I was baffled, no, annoyed, that people would entrust their web security to random third-party domains. Apparently, I was loud enough that Brad Hill (back then, the co-chair of the W3C webappsec group) told me there was a future deliverable in the working group, and that sort of got me looking at it.
Working on standards can be very long and time-consuming. But in the end, and through a great collaboration with Mike West, Devdatta Akhawe, François Marier, Joel Weinberger, and many others, we shipped it as SRI. With this, you can compute, e.g., the SHA-256 hash of a JavaScript file that is hosted elsewhere and put it in the integrity attribute of your <script> tag. This allows the browser to check whether a CDN-hosted third-party script actually matches the expectations of the embedding website’s author.
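In markup, that looks roughly like this (the URL and digest below are placeholders, not real values):

```html
<!-- The browser hashes the fetched file and refuses to run it if the digest
     does not match the integrity attribute. The crossorigin attribute is
     needed so the cross-origin response is readable for hashing. -->
<script src="https://cdn.example.com/library.min.js"
        integrity="sha256-REPLACE_WITH_BASE64_DIGEST_OF_THE_FILE"
        crossorigin="anonymous"></script>
```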
For FirefoxOS, I hoped to build upon SRI to create web-hosted packaged apps with higher privileges, but that never came to be. The project was shut down before we got there. I still think SRI nicely solved the security problem that I saw with CDNs.
However, nowadays browsers partition their caches in order to prevent cross-site tracking. Because of that, SRI is nowhere near as useful as it was. In fact, I agree with Terence Eden here and believe these asset CDNs are not very useful anymore either: With HTTP/2 (and HTTP/3) multiplexing, browsers can fetch the necessary subresources over the existing connection much faster than by reaching out to a CDN over a separate connection. So that chapter is likely closed.
What did you move on to after Firefox OS?
After Firefox OS, another reorg brought me to Firefox Security. At first, I applied the XSS/eslint work from FxOS to Firefox desktop, which also has quite a lot of UI code written in JavaScript. That code runs in the privileged, unsandboxed process, so this work led to some impactful bugs, which I wrote down in a blog post called Remote Code Execution in Firefox beyond memory corruptions.
Most of this should be solved by now: Firefox has adopted the eslint rule quite widely and has also invested massively in defense in depth (e.g., enforced CSPs, implicit sanitization). The full project was published in a whitepaper, Hardening Firefox against Injection Attacks, together with Christoph Kerschbaumer and Tom Ritter.
Then, I went on to poke some further holes in the sandbox, which is explained in Examining JavaScript Inter-Process Communication in Firefox and which YouTuber LiveOverflow made into a video: What is a Browser Security Sandbox?! (Learn to Hack Firefox). In 2020, I also started working on the Sanitizer API in the webappsec working group.
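To give an idea of the goal (the spec has been in flux, so the exact API shape may differ from this sketch):

```js
// Rough sketch of the Sanitizer API's intent (the spec has gone through
// several drafts, so method names and options may differ):
const untrusted = '<img src="x" onerror="alert(1)">hello';
document.body.setHTML(untrusted); // like innerHTML, but strips script-running markup
```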
But then, in the summer of 2022, I totally switched gears: When the manager of a different team left, we were undergoing a bit of a reorg again and our group was short a manager. I offered to take the position and have been managing Firefox Security Engineering for almost two years now. The team is just incredible and I get to work with amazing folks. I now spend my days supporting a great set of people who make Firefox secure, private, and user-first.
Just recently, when our browser was targeted at Pwn2Own 2024, we worked with a wide group of teams across Mozilla to get a security release out the door in less than 24 hours, a track record that we have kept up for real-life zero-days as well as exploit competitions for many years now.
At this time, we work on web security improvements (fingerprinting protections, HTTPS-Only), sandbox escapes, hardening, fuzzing, and much more.
What is your work / computing environment like?
For a long while, I loved using the X-series ThinkPads, mostly for their portability. Originally, I ran Ubuntu with awesomewm as my window manager, but that setup broke a bit too often for me to be efficient, so I eventually switched to a vanilla Ubuntu installation.
However, that became untenable when I started working on Firefox proper: The build times on an X-series ThinkPad were really unbearably long. At first I had a workstation in the office that I could SSH into when I wanted to develop & build Firefox, but at some point the workstation broke and I made the jump to macOS. Now, I can have a build of Firefox in less than 15 minutes :-)
Though, with my switch to management, my tools of choice changed quite a bit: An online word processor, email, chat, bug trackers, video meetings.
When I do code, I use VS Code with clangd. However, I would like to switch to helix, if only I could find the time :)
I still like tiling windows, so I use Rectangle to keep that just a quick keyboard shortcut away. I also use Beams to remind me of upcoming video meetings and make joining them one click.
Building on your experience with browser security: if you could go back in time and change or add something anywhere in the full stack in order to improve security, W3C be damned, what would you do?
I actually really like how the web is built. I know that everything is constrained by backwards compatibility, but I also don’t want the web to lose that: It’s amazing that most web pages from the 1990s still work perfectly fine and I wouldn’t want to change that.
I love the web because it’s the one platform that has defied corporate control over and over. I really enjoy working towards keeping it that way. The lack of “central control” makes it messy and weird. Sometimes I resent that every change is piecemeal and half-assed, but that’s 1000 times better than full corporate control or someone forcing changes on all web pages unilaterally.
With all this baggage and inertia aside, I believe that the requirement for a “Secure Context” has made the web stronger, and a 2020s milestone in a similar vein would be a great improvement to the web.
Requiring websites to be more resilient to widespread & prevalent attacks by nudging them into better development practices should be possible again. There have been multiple attempts and I think they are worth pursuing.
Bringing up or onboarding new security engineers can be a big task. Are there any lessons from the CTF scene that proved useful to you, or would be useful for newcomers to learn from? There’s an overwhelming amount of material for computing students looking into the offensive side of security, but much less for defensive work, and even less for browser work. Which resources and exercises would you suggest for ‘onboarding’ the coming generations?
Quick disclaimer: When I played CTFs before 2007, the bugs were simple and so were the exploits. We did not get to bypass sandboxes and we did not have to target modern browsers.
However, I think the most underrated thing about playing CTFs is teamwork and community. Finding a bug is a hell of a rush, but it’s also not the most important thing: Spending time with friends and learning from them might be.
A great way to learn about defense is to practice offense: Once you believe you have fully understood an attack and can replicate it, you also get a good feeling for what kind of stop-gaps would really upset an attacker. As a defender, you need to walk up and down the abstraction layers just as fluently as an attacker, such that your mitigations aren’t easily bypassable.
A favorite example of mine is from work again. We had a very gnarly XSS bug in Firefox that allowed attackers to inject into the privileged (non-sandboxed!) parent process of the browser. This can happen because Firefox’s browser UI is written in web technologies like HTML, JS and CSS. We fixed this not by looking at the specific XSS bug (something like foo.innerHTML = attackerControlledString), but by looking at the vulnerability class from a lower abstraction layer:
Given that we control both the JS/CSS/HTML code as well as the underlying browser runtime, we just changed the whole implementation of the innerHTML setter. Now, when the steps for assigning to innerHTML run in a privileged context, Firefox will always perform XSS sanitization on the input string.
By going an abstraction level deeper, we could look beyond code audits or security guidelines and instead completely kill a whole vulnerability class.
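The real change lives inside Gecko’s C++ implementation, but the effect is roughly what this JavaScript sketch shows; sanitize() and isPrivilegedContext() are hypothetical stand-ins, not actual Firefox functions:

```js
// Conceptual sketch only: Firefox does this inside the engine (in C++), not
// by patching prototypes from script. sanitize() and isPrivilegedContext()
// are hypothetical stand-ins for the internal sanitizer and context check.
const raw = Object.getOwnPropertyDescriptor(Element.prototype, "innerHTML");
Object.defineProperty(Element.prototype, "innerHTML", {
  ...raw,
  set(markup) {
    if (isPrivilegedContext()) {
      markup = sanitize(markup); // strip scripts, event handlers, etc.
    }
    raw.set.call(this, markup);
  },
});
```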
Most of my thinking here is also deeply inspired by the langsec folks: Meredith L. Patterson and her talk “The Science of Insecurity”, Sergey Bratus, and Travis Goodspeed (look for his “packet in packet” work).
At the end of the day, regardless of whether you work in offense or in defense, I want you to ask yourself: For whom?
I tried to stay away from forums this past week on account of work and side projects I want to finally wrap up, but I just wanted to say how cool this was! I know @freddyb doesn’t speak for Mozilla here, but I think general anti-web smuggery prevents an embarrassingly wide crowd of computer people from appreciating just how bloody extraordinary web browser security work is. If our industry has an outer rim, web browsers are definitely one of the cantinas on Tatooine.
@freddyb, got time for an impromptu Q&A session? :-)
You guys rock, I’m super happy you managed to do this write-up after all!
I stopped CTF hacking in my free time once I started a full-time job. I also moved to Berlin, and playing with fluxfingers remotely just wasn’t that much fun. If I had infinite free time (or maybe when the kids are older?), I would like to continue, though. :)
Our interns mostly work on features: it’s more rewarding and allows for better planning. However, I wouldn’t be too concerned about the mindset for an offensive project either: There is always something to be found or improved. My current favorite advice for managing interns is a) give them a lot of exposure, b) allow them to ask a lot of questions (and listen carefully!), and c) make sure they have plenty of opportunity to share. Especially when remote: Tell them they should rather overshare than be too timid :)
I don’t speak for my employer, etc. etc. Lots of worries, though :) First, I think market share dominance is a problem, including developers not testing or building with more than one browser in mind. With my favorite browser (cough) blocking a lot of trackers by default, it’s hard to estimate how bad the situation really is, though. I could imagine someone is undercounting, too. Secondly, I’m worried about content: Features that help “fill out forms” with AI could alienate people from the web, making them more likely to consume content in apps rather than on the web.
nit: s/@freddieb/@freddyb
Interesting interview! Just curious if you could expand on this:
Most people in tech are privileged enough to be a bit picky about who they work for. Use that privilege to work towards the change you want to see in the world.
@freddyb, if you don’t mind another question: With the rise of segmented caches, and with that a slight swing back to people hosting their own resources rather than large CDNs, is there any benefit to adding SRI to all of your stylesheets & javascript?
I want to add a filter to my static site to inject the attribute, but I’m pretty sure that would not improve anything. I still like the idea of documenting the SHA256 hash, for whatever it’s worth.
If the subresources are under your control and same-origin, then there is no value at all.
I don’t know how other engines have implemented it, but I know the Firefox implementation adds an additional computation & comparison step at the end of the request and before execution/rendering. Leaving it out will make things faster, though probably not noticeably either way.
But hey, maybe you learn something cool when building the filter and that’s worth it?
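If you do build it, the integrity value itself is easy to produce at build time; here is a minimal Node.js sketch (the file path is made up):

```js
// Minimal Node.js sketch for a static-site build step: compute the SRI
// integrity value for a local asset (the file path is illustrative).
const { createHash } = require("node:crypto");
const { readFileSync } = require("node:fs");

function integrityFor(path) {
  const digest = createHash("sha256").update(readFileSync(path)).digest("base64");
  return `sha256-${digest}`;
}

// Emit e.g.: <script src="/assets/site.js" integrity="sha256-..."></script>
console.log(integrityFor("assets/site.js"));
```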
Would you mind explaining why segmented caches are important for this? Do they reduce traffic that much compared to regular file-based caches?
I think maybe you have misunderstood what segmented caches are?
See here: https://www.peakhour.io/blog/cache-partitioning-firefox-chrome/
Oh, I would have called that partitioned. And there is apparently something called a segmented cache, which is for range requests: https://docs.fastly.com/en/guides/segmented-caching
Thanks for continuing this series! This kind of thing feels like community building to me, and I like hearing positive stories about lives in tech.
I hope the relay keeps going!
nit: link to the diploma thesis is broken.
Thanks!
@crazyloglad pls fix :)
I just clicked out of curiosity and thought I’d report it
can’t edit the post - mods?
Fixed!
That modlog entry… phew.
For anyone who might wonder what this meant in the future: the moderation log entry wasn’t salty, just long: it includes the whole text of the interview both before and after the edit.