Reminder that https://pagure.io/pagure exists and is a GPL source control platform. No hosts running this as a service so far as I know, but nothing stops anyone reading this from being that host.
As a matter of practice NVD re-scores most CVEs that come in: https://nvd.nist.gov/general/cve-process
This is done because a large number of CVEs that come in are of very low quality and you do need humans in the loop.
If you wrote that CVE you posted then you likely forgot a field if your own score is missing.
Development on the Swift package manager is focused on starting work on an open source package registry server implementation in concert with the community. The goal is to create the technical components required to transition the Swift package ecosystem from one based on source control to one based on registries, enhancing the security and reliability of the ecosystem. We will work with community-run projects such as the Swift Package Index to ensure great package discovery alongside the benefits that the registry brings.
Interesting. Is there somewhere people are talking about this move? I’d love to know if it’s going to be a central index or about how they plan to integrate the package index.
I don’t remember if I’ve heard Apple talk about making a central index before. But the primary discussion spot before implementation, besides formal proposals, is generally the Swift Forums. For what it’s worth, here’s the package manager section.
Yup. I’m sure we’ll see a flurry of follow-on advisories for the 80 billion packages that thought shipping a bundled version of openssl was a good idea.
“but it’s easier to statically link / use a vendored shared library / etc”.
Deferring the tough problems until you’ve got 0-days in your production systems rather than dealing with complexity up front isn’t an especially great idea…
That I don’t know exactly, but cargo should know what versions get used in each build and checking that against an eventual RustSec advisory shouldn’t be too hard.
Well it turns out that cargo (and rustup) statically link OpenSSL, so depending on the vulnerability, you could hit an RCE when cargo goes to fetch the RustSec advisories. (Like if it’s exploitable in common TLS client usage and someone poisons your DNS to tell your cargo to talk to their server)
Amusingly, the Rust OpenSSL bindings are still on the 1.x version: 3.0 has proven to be problematic for other reasons as well (the build depends on less widely available Perl modules, and there are some perf regressions).
Many package managers require a list of dependencies, including compile-time-only deps. I’m not familiar with “crates” but I think rust programs have a Cargo.toml or Cargo.lock listing dependencies? Or does rust allow implicit deps?
Filtering for applicability is great, and should have been a standard. I’m soooo tired of security tools crying wolf. Just like the desktop anti-virus software, they have lost sight of providing security, and become security theatre and a numbers metagame.
CVE databases are full of low-effort reports, like flagging of uses of RegEx. That’s a slam-dunk “Possible DoS critical vulnerability”.
CVSS scoring rules are designed to be cover-your-ass for security vendors, not for usability by people who need to respond to these reports. The scoring rules are “we can’t know if you have written vulnerable code, but you may have, so that’s a critical vulnerability in the code we imagine you may have written”.
If npm install reports fewer than 17 critical vulnerabilities, I double-check if the installation went properly.
Hate to say I agree with this. I work in security, and so many of my peers across now several organizations view the reports from these tools as absolute truth rather than raw data/input to further analysis. A vulnerability in a component does not necessarily translate into a vulnerability in your application, but it is what the scanners that you, and ultimately your customers, run will report.
We’ve seriously lost our way. There needs to be more rigor in how these tools are applied. So much focus in “Supply Chain Security” is instead put on shifting responsibility to open source developers who absolutely did not sign up to perform vulnerability verification services for their commercial users.
CVE databases are full of low-effort reports, like flagging of uses of RegEx. That’s a slam-dunk “Possible DoS critical vulnerability”.
The world of security research is a weird one. There’s an incentive to get as many CVEs at as high a severity ranking as possible to bolster your resume. No one in hiring seems to care about quality :(
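Tangent on the npm install example above: npm does at least let you raise the severity threshold so the exit code only trips on the worst findings, and you can leave dev-only dependencies out of the report. It does nothing for the applicability problem, but it cuts the noise. This is from memory, so double-check the flags against your npm version:
# Only fail the audit when critical-severity advisories are present.
npm audit --audit-level=critical
# Skip devDependencies, which never ship to production (npm 8+; older npm used --production).
npm audit --omit=dev --audit-level=critical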
Host here. Curious what Lobsters thinks of this back story of Android. It’s wild to think about what the world might look like if Android hadn’t become so dominant and mobile was an Apple (or Microsoft or whomever) proprietary monoculture.
It’s worth remembering what the smartphone market looked like prior to the iPhone: Symbian had 70% of the market. The Symbian EKA2 kernel is beautiful (and the book about its internals is now a free download, everyone should read it) and Fuchsia’s design shares a lot in common with it. Symbian was moving towards open source and towards using Qt for the GUI (their existing userspace APIs were designed for systems with 2-4 MiB of RAM, which added a lot of cognitive load to solve problems that didn’t really exist on machines with 128+ MiB).
Android successfully killed Windows Phone and Symbian. If it hadn’t happened, then it’s quite likely that an EKA2+Qt stack would have become a major player. I wouldn’t be surprised if we’d ended up with a lot more diversity: I don’t think iOS would have killed Windows Phone and I don’t think either would have killed the EKA2+Qt. The really interesting question is what would have happened with things like Maemo / Meego / WebOS / FirefoxOS in a more competitive smartphone OS market.
The competing operating systems were mostly killed by network effects from the app stores. This is also what’s largely killed Android as an open-source platform: you can run AOSP on a handset, but if you don’t install the Google Play services then a load of third-party apps simply won’t run and you must use Google Play Services for a bunch of features (assuming you want those features) if you want to offer the app in the Google Play Store. Since most users use the Play Store, being excluded from there is a problem. I’m really looking forward to a competent antitrust regulator taking a look at this.
It’s entirely possible that one of the other vendors would have created a decent app store. HP had a very nice model for their WebOS app store where, in addition to their store, they provided the back-end services as something that you could repackage, so you could create your own store for your apps but people would still get all of the update and deployment things that they’d get buying from the HP storefront.
I think it’s more likely that we’d have seen much more of a push towards mobile-optimised web apps instead of native apps. If no platform had more than 20-30% market share then you could either build 5-6 native apps or one web app for the same market size and that’s a very different proposition to building two for the duopoly.
I never experienced that Symbian beauty. I needed to port a simple game to it, and its developer-facing weirdo C++ side left such a terrible impression on me that I only remember promising myself to never touch Symbian again. I still wouldn’t use Qt.
Anyway, I think Nokia killed itself, and it had nothing to do with the OS kernel. They ignored capacitive screens and tried to keep making featurephones until it was too late. Their popular low-end Symbian was unfit for the smartphone era, and their fancy Symbian, like Windows CE, was a toy desktop OS meant for running spreadsheets, not being a normal phone operated with fat fingers.
I miss WebOS. They did all the right things in software, but fast-enough hardware to run it didn’t exist yet. They ended up being free touch-UI R&D for Apple and Google to copy later.
Windows Phone was a decent attempt, but at that point having a 3rd party app ecosystem mattered.
I never experienced that Symbian beauty. I needed to port a simple game to it, and its developer-facing weirdo C++ side left such a terrible impression on me that I only remember promising myself to never touch Symbian again. I still wouldn’t use Qt.
Exactly. A beautiful kernel, hidden under an awful userspace.
Anyway, I think Nokia killed itself, and it had nothing to do with the OS kernel
I disagree. They had a great kernel and a terrible userland. They tried to fix this by replacing the kernel with Linux, which was completely inappropriate for devices with such a small amount of RAM (Android didn’t become successful until smartphones had at least 512 MiB of RAM). When this failed, they tried to jump on Windows Phone as an alternative.
They had a few teams building some quite nice UIs on top of Qt, but they kept changing the platform out from underneath.
I miss WebOS. They did all the right things in software, but fast-enough hardware to run it didn’t exist yet. They ended up being free touch-UI R&D for Apple and Google to copy later.
Completely agreed. I got a TouchPad in their free-toys-for-open-source-developers programme and loved it, but they killed the entire ecosystem a few weeks after I got mine.
Windows Phone was a decent attempt, but at that point having a 3rd party app ecosystem mattered.
Yup, my partner had one and it had a great UI (I was astonished - I expected to hate it and ended up liking it more than iOS and Android), but no apps and a really buggy sound subsystem (crashed every few days which made alarms silent and phone calls not work).
A beautiful kernel, hidden under an awful userspace.
Do you think the userspace could have been any better, given the resource constraints of the device and the state of C++ at the time? Would modern C++, or a newer language like Rust, enable a better userspace?
A bit of a tangent: I keep wondering if a modern device that embraces constraints on CPU speed and RAM, similar to Symbian-based or early embedded Linux devices, could achieve outstanding battery life. Your comment the other day about your Psion computer running on two AA batteries made me think again about this. In particular, I wonder how good it could get in a device designed specifically for blind people, and thus having no screen, but only audio output. The big thing I’m not sure about is how much battery power is consumed by CPU and DRAM compared to the wireless radio(s), particularly when wireless connectivity is enabled but the CPU is mostly idle.
Do you think the userspace could have been any better, given the resource constraints of the device and the state of C++ at the time? Would modern C++, or a newer language like Rust, enable a better userspace?
There was PIPS, which provided POSIX (no fork but otherwise most of a POSIX syscall API). This would have, at least, been not worse than Linux / XNU as the system interface and would have made porting *NIX libraries much easier if it were the default. The kernel was explicitly designed to support multiple personalities, with the expectation that the EPOC32 interfaces would not last forever. POSIX probably wasn’t ideal but might have been necessary given that Android and iOS were both POSIX.
The big thing I’m not sure about is how much battery power is consumed by CPU and DRAM compared to the wireless radio(s), particularly when wireless connectivity is enabled but the CPU is mostly idle.
It depends a lot on how much you’re using the wireless. Active use can drain it quite a bit but modern wireless chipsets are really good at entering low-power modes where they are mostly asleep and wake up as passive receivers periodically to see if the base station has told them to wake up properly.
From what I remember, I got about 30 hours of active use from the Psion. I remember it told me how much current it had drawn, but I don’t remember the numbers. Wikipedia tells me that alkaline AAs can provide up to 3.9Wh, so probably something like 7-8Wh for a pair of them (that sounds high, I vaguely remember it being closer to 3). At 8Wh over 30 hours, you’re looking at a maximum draw of 250mW. That’s quite a lot for a microcontroller, and I think low-power WiFi chipsets can run at about 50mW (based on a 10-year-old and probably wrong memory), so this seems fairly feasible.
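A quick way to redo that arithmetic, using the same numbers as above (two AAs at ~3.9Wh each over ~30 hours, result in milliwatts):
# 2 cells * 3.9 Wh each, drained over 30 h, converted to mW
echo "2 * 3.9 / 30 * 1000" | bc -l
# prints ~260, i.e. roughly the 250mW average draw estimated above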
I was a Symbian fanboy and still miss my E61, but the “app store” (it didn’t really exist) was a mess. The entire user-facing software experience apart from the built-in apps was a mess. At one time there were 2 separate web browsers.
Still, WirelessIRC is the best mobile IRC client I’ve ever used to this day.
I think it’s more likely that we’d have seen much more of a push towards mobile-optimised web apps instead of native apps. If no platform had more than 20-30% market share then you could either build 5-6 native apps or one web app for the same market size and that’s a very different proposition to building two for the duopoly.
That is interesting and I can see that fragmentation would favor the web. My understanding from this interview is that Google’s android bet was an attempt to keep the web (and google search accessed via the web) a viable thing in the mobile world and prevent the kind of monoculture that windows had in the desktop OS world pre-web.
Android was probably nothing at all until the iPhone, and indeed, really, until the App Store; Google needed to prevent Apple from becoming Microsoft, as you note. Android has succeeded at its initial intent, and too, has become quite good, which is, if I may be snarky, unusual for a Google product that has achieved its aims.
Since most users use the Play Store, being excluded from there is a problem. I’m really looking forward to a competent antitrust regulator taking a look at this.
Honestly, I suspect app stores are a natural monopoly on a platform, and addressing that is the wrong symptom and likely to make the experience worse. The actual platforms themselves were what end-users could care about and make a choice over.
You’re probably right on PWAs making a lot of sense for the average case that has no need to take advantage of platform APIs though.
The problem isn’t the app store, the problem is the coupling of the Play Store and Play Services, which makes it incredibly difficult to ship a phone that doesn’t send a load of data to Google.
Published versions 10.1.1 and 10.1.2 would wipe all files they could find/touch. 9.2.2 and 11.x would leave you a message on your desktop. Github/npm has removed versions 9.2.2 and the 10.x but left 11.x up. I’m curious what everyone’s take on that is.
I wonder who at System76 was responsible for evaluating all possible directions they could invest in, and decided the desktop environment is the biggest deficiency of System76
It’s also great marketing. I’ve heard “System76” way more since they have released Pop_OS. So while people may not be buying machines for the OS it seems that as a pretty popular distro it keeps the name in their head and they may be likely to buy a system on the next upgrade.
I know a few people who run Pop_OS, and none of them run it on a System76 machine, but they all choose Pop over Ubuntu for its Gnome hacks.
Gnome itself isn’t particularly friendly to hacks — the extension system is really half-baked (though perhaps it’s one of the only uses of the SpiderMonkey JS engine outside Firefox, that’s pretty cool!). KDE Plasma has quite a lot of features, but it doesn’t really focus on usability the way it could.
There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.
Honestly, I think that if something more friendly than Gnome and KDE came along and was well-supported, it could really be a big deal. “Year of the Linux desktop” is a meme, but it’s something we’ve been flirting with for decades now and the main holdups are compatibility and usability. Compatibility isn’t a big deal if most of what we do on computers is web-based. If we can tame usability, there’s surely a fighting chance. It just needs the financial support of a company like System76 to be able to keep going.
There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.
It’s very difficult to do anything meaningful here. Consistency is one of the biggest features of a good DE. This was something that Apple was very good at before they went a bit crazy around 10.7 and they’re still better than most. To give a couple of trivial examples, every application on my Mac has the buttons the same way around in dialog boxes and uses verbs as labels. Every app that has a preferences panel can open it with command-, and has it in the same place in the menus. Neither of these is the case on Windows or any *NIX DE that I’ve used. Whether the Mac way is better or worse than any other system doesn’t really matter, the important thing is that when I’ve learned how to perform an operation on the Mac I can do the same thing on every Mac app.
In contrast, *NIX applications mostly use one of two widget sets (though there is a long tail of other ones) each of which has subtly different behaviour for things like text navigation shortcut keys. Ones designed for a particular DE use the HIGs from that DE (or, at least, try to) and the KDE and GNOME ones say different things. Even something simple like having a consistent ‘open file’ dialog is very hard in this environment.
Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.
I think you’re replying to @br, not to me, but your post makes me quite sad. All of the DEs that you list are basically variations on the 1984 Macintosh UI model. You have siloed applications, each of which owns one or more windows. Each window is owned by precisely one application and provides a sharp boundary between different UIs.
The space of UI models beyond these constraints is huge.
I think any divergence would be interesting, but it’s also punished by users - every time Gnome tries to diverge from Windows 98 (Gnome 3 is obvious, but this has happened long before - see spatial Nautilus), everyone screams at them.
I would hesitate to call elementary or Gnome Mac-like. They take some elements more than others, sure. But a lot of critical UI elements from Mac OS are missing, and they admit they’re doing their own thing, which a casual poke would reveal.
I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time. (I’d say Gnome 2 draws more from both.)
I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time
I would have argued that at one point. I’d have argued it loudly around 2001, which is the last time that I really lived with it for longer than 6 months.
Having just spent a few days giving KDE an honest try for the first time in a while, though, I no longer think so.
I’d characterize KDE as an attempt to copy all the trends, for all time, from Windows + Mac + UNIX, add a few innovations and an all-encompassing settings manager, and let each user choose their own specific mix of those.
My current KDE setup after playing with it for a few days is like an unholy mix of Mac OS X Snow Leopard and i3, with a weird earthy colorscheme that might remind you of Windows XP’s olive scheme if it were a little more brown and less green.
But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.
But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.
I stopped using KDE when 4.x came out (because it was basically tech preview and not usable), but before that I was a big fan of the 3.x series. They always had settings for everything. Good to hear they kept that around.
I am no longer buying this consistency thing and the idea that the Mac is superior. So many things we do all day are web-apps which all look and function completely different. I use gmail, slack, github enterprise, office, what-have-you daily at work and they are all just browser tabs. None looks like the other and it is totally fine. The only real local apps I use are my IDE, which is written in Java and also looks nothing like the Mac, a terminal, and a browser.
Just because it’s what we’re forced to accept today doesn’t mean the current state we’re in is desirable. If you know what we’ve lost, you’d miss it too.
I am saying that the time of native apps is over and it is not coming back. Webapps and webapps disguised as desktop applications a la Electron are going to dominate the future. Even traditionally desktop heavy things like IDEs are moving into the cloud and the browser. It may be unfortunate, but it is a reality. So even if the Mac was superior in its design the importance of that is fading quickly.
Google has done the hard work of implementing a JS platform for almost every computing platform in existence. By targeting that platform, you reach more users for less developer-hours.
The web is the easiest and best understood application deployment platform there is. Want to upgrade all users? F5 and you are done. Best of all: it is cross-platform
I mean, if you really care about such things, the Mac has plenty of native applications and the users there still fight for such things. But you’re right that most don’t on most platforms, even the Mac.
Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.
Honestly, I’d say Windows is more easily extensible. I could write a shell extension and immediately reap its benefit in all applications - I couldn’t say the same for other DEs without probably having to patch the source, and that’ll be a pain.
My dream is that I fire up Firefox and it doesn’t make a single network request until I click a bookmark or type a URL and hit enter. Do you think there’s any hope of getting that as an option? As it is I’ve found it’s impossible to configure this behavior without external tools.
Unfortunately not. There are many things we can’t do out of the box, like Netflix (DRM), OpenH264(5?). We’ll also need updated info for intermediate certificates and revocations and then updates for the browser itself and addons. I could go on.
Surely it’s technically feasible to invent a pref and put all of those checks behind this pref. But there’s no point in shipping a not-very-usable browser from our perspective. Conway’s law further dictates that every team needs their own switch and config and backend. :) :(
Why do DRM and OpenH264 require network connections on startup?
AFAIK it’s a legal work-around: Mozilla can’t distribute an H264 decoder themselves so they have users (automatically) download one from Cisco’s website on their own machine. Sure, you could download it on demand when the user first encounters an H264 stream … but it would put Firefox at an even greater disadvantage compared to browsers willing to pay the MPEG extortion fee.
I also don’t see how adding an option would render the browser not-very-usable, perhaps you meant something else?
Obligatory Coding Horror link ; ). What you are looking for should be possible with proxies on Firefox (but not Chrome last I checked). I would suggest checking out the Tor browser fork and the extension API.
it’s a legal work-around: Mozilla can’t distribute an H264 decoder themselves so they have users (automatically) download one from Cisco’s website on their own machine.
Wouldn’t Firefox download it whenever it updates itself? Not every time it starts up?
Obligatory Coding Horror link ; ). What you are looking for should be possible with proxies on Firefox (but not Chrome last I checked). I would suggest checking out the Tor browser fork and the extension API.
I am not the one who asked for this feature, but I’m sure they would be fine with an option in about:config. Failing that, a series of options to disable features that make unprompted requests would at least get them closer (some of the aforementioned features already have that).
Wouldn’t Firefox download it whenever it updates itself? Not every time it starts up?
That’s as far as I know and I’m too lazy to find out more 😝. Maybe the OP was talking about first launch?
Regardless of the exact legal and technical rationale, a web browser’s job is to display content to the user as fast as possible and pre-fetching resources eliminates lag. Whether that is checking for OpenH264 updates or simple dns-prefetching, the improvement in UX is what justifies the minimal privacy leakage from preemptively downloading oft-used resources. Or, at least that is what I think the OP was trying to get across : )
… I’m sure they would be fine with an option in about:config. Failing that, a series of options to disable features that make unprompted requests would at least get them closer (some of the aforementioned features already have that).
It could work as an about:config option, but you would still have to convince someone to spend resources to get it mainlined. Hence why I suggested checking the extension API : )
Given Tor’s threat model, I would assume they would have already done a much more thorough job at eliminating network requests that would compromise privacy. And if not, they would have the organizational capacity and motivation to implement and upstream such a feature. The Tor Browser can be used as a normal browser by disabling Onion routing via an about:config setting.
Regardless of the exact legal and technical rationale, a web browser’s job is to display content to the user as fast as possible and pre-fetching resources eliminates lag. Whether that is checking for OpenH264 updates or simple dns-prefetching, the improvement in UX is what justifies the minimal privacy leakage from preemptively downloading oft-used resources. Or, at least that is what I think the OP was trying to get across : )
Pre-fetching sometimes eliminates lag and sometimes causes it by taking bandwidth from more important things. Maybe OP meant to argue that these concerns are negligible and not deserving of a configuration option, but it’s hard to infer it from what they wrote.
My guess is that the Mozilla guy didn’t answer the question directly and it probably doesn’t actually download it with every start up as he seemed to imply.
I think it would be fair to include an option to allow power users to pull these updates rather than have these pushed. In the absence of this option, Mozilla is, or is capable of, collecting telemetry on my use of Firefox without my consent and violating the privacy ethos it espouses so much in its marketing.
If you proxy Firefox on launch (on Mac I use CharlesProxy) you can see the huge amount of phoning home it does at launch, even with every available update setting in Firefox set to manual/not-automatic.
This is from the top of my head. There are many differences. But here’s an interesting tradeoff:
Their UI has a native implementation which makes sandboxing and privilege separation easier. We have opted to implement our UI in HTML and JavaScript which is great for shared goals in accessibility, performance improvements, community contributions, and extensibility. But it also means that our most privileged process contains a full HTML rendering engine with JavaScript and JIT and all.
What percentage of the browser do you expect to be able to sandbox in this way? Isn’t there work going on to implement shared memory between WASM modules?
Noise and/or heat is going to be an issue in a 1u server like that. If you want a rack mountable case then aim for a 4u or larger, but really just build a tower if you’re not going to put it in a rack.
This is exactly why I refuse to use npm, pip, etc. I only use the OS’s package manager, which uses a cryptographically signed package repo. I absolutely hate these hacks of workarounds.
I’m sure there are. And I hate that. But at least it’s going through my OS’s package manager, making it easy to use a single interface for auditing potential security issues.
The issue is that sometimes you’re much, much further behind. For example python-cryptography is still stuck at 3.2.1 on RHEL8…
So either you use pip… or a very old version…
Fortunately, that’s not an issue I have being a BSD user using the nearly-always-up-to-date ports tree. I enjoy up-to-date software on a regular basis. Minimal lag between when a project’s release is published and when the ports tree gets updated to the new version.
The problem with per-language package repos like npm is that anyone and everyone has access to upload their project. That inherently means users must trust the most malicious of developers who upload malware to the repo.
In the case of FreeBSD ports, the ports tree is gated by FreeBSD developers who have the opportunity to audit every single part of creating new ports or updating existing ports. It’s much easier to place trust in a (relatively) small set of developers who ensure sanity before committal.
The package manager I use for my system (FreeBSD’s pkg) makes it incredibly easy to audit packages, even checking something called VuXML to check if any of your installed packages have known vulnerabilities. I can see which files (config, lib, application, etc.) have changed from their default since pkg tracks hashes for each file it installs. Additionally, the package repo itself is cryptographically signed so that it’s not possible to inject malicious code in transit. If the server hosting the package repo is compromised, there’s no problem since the private crypto key material is stored elsewhere. And this bit of crypto is protected by the OS itself.
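For anyone who hasn’t used it, the two checks described above look roughly like this (from memory, so see pkg-audit(8) and pkg-check(8) for the exact options):
# Fetch the latest VuXML database and report installed packages with known vulnerabilities.
pkg audit -F
# Verify the checksums of every installed package's files against what was recorded at install time.
pkg check --checksums --all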
That’s fine in theory, but when someone packages a program for FreeBSD that uses a language-specific package manager, they use the built-in infrastructure in the ports tree that downloads the dependencies, then packages them in distfiles and records their hash. This is no more secure than pulling from upstream directly. The folks that package things for FreeBSD aren’t auditing the upstream any more than npm / pip / gem / whatever does.
The only thing that the signature gives you is an attestation that the package was built on a FreeBSD build machine and has not been tampered with between there and you by anyone who did not have access to the signing key. It does not give you any assurance that the build machine wasn’t compromised or that there weren’t supply-chain vulnerabilities upstream from the builders.
Most FreeBSD packages don’t use reproducible builds, so you don’t have any assurance that your packages contain the code that they claimed they did: if you try to rebuild locally from the same port, you may or may not get the same binary. Is the one you got trojaned? Who knows.
pkg audit is great, but npm and friends have similar things that tell you if there are published vulnerabilities in their libraries. They have two problems:
They tell you only about published vulnerabilities. Good projects will go through the process of getting a CVE assigned and doing coordinated disclosure. Others just push out a new version. The auditing tools tell you only about the former.
They are very coarse-grained. They don’t let you know if the vulnerability in a library is on a code path used by anything you have installed and they don’t let you know if that codepath (if it is reachable) is using any data that can be influenced by an attacker. So pkg audit shows a vulnerability in curl’s URL parsing. Does it matter? Is curl used only with trusted URLs? Maybe it’s fine, but can a server-side redirect trigger it?
Sometimes minutes. Sometimes hours. Sometimes days. It depends on the time and resources of a volunteer-run project. For example, I’ve seen FreeBSD update the Tor port just minutes after a new release. FreeBSD generally updates Firefox to RC releases so that we can test what will be the next version before it comes out (which means we have a negative time window in this particular case.)
I wonder why the kernel community seems to have structural issues when it comes to filesystems - btrfs is a bit of a superfund site, ext4 is the best most people have, and ReiserFS’s trajectory was cut short for uh, Reasons. Everything else people would want to use (e.g. ZFS, but also XFS, JFS, AdvFS, etc.) is a hand-me-down from commercial Unix vendors.
On all of the servers I deploy, I use whatever the OS defaults to for a root filesystem (generally ext4) but if I need a data partition, I reach for XFS and have yet to be disappointed with it.
Ext4 is pretty darned stable now and no longer has some of the limitations that pushed me to XFS for large volumes. But XFS is hard to beat. It’s not some cast-away at all, it’s extremely well designed, perhaps as well as or better than the rest. It continues to evolve and is usually one of the first filesystems to support newer features like reflinks.
I don’t see why XFS couldn’t replace ext4 as a default filesystem in general-purpose Linux distributions, my best guess as to why it hasn’t is some blend of “not-invented-here” and the fact that ext4 is good enough in 99% of cases.
It would be great if the recent uplift of xfs also added data+metadata checksums. It would be perfect for a lot of situations where people want zfs/btrfs currently.
It’s a great replacement for ext4, but not other situations really.
Yes, I would love to see some of ZFS’ data integrity features in XFS.
I’d love to tinker with ZFS more but I work in an environment where buying a big expensive box of SAN is preferable to spending time building our own storage arrays.
No, you’re right. I can’t find it but I know I read somewhere in the past six months that XFS was getting this. The problem is that XFS doesn’t do block device management which means at best it can detect bitrot but it can’t do anything about it on its own because (necessarily) the RAIDing would take place in another, independent layer.
Yep. I’ve been using xfs for 20 years now when I need a single-drive FS, and I use zfs when I need a multiple-drive FS. The ext4 and btrfs issues did not increase my confidence.
I’m really glad people are starting to kick the tires on this one.
Reminder that https://pagure.io/pagure exists and is a GPL source control platform. No hosts running this as a service so far as I know, but nothing stops anyone reading this from being that host.
Shame they didn’t enable secret scanning on that repo
It seems they make up CVSS scores for multiple projects?
As someone involved with CVE assigning at work, I know that e.g., https://nvd.nist.gov/vuln/detail/CVE-2022-46883 did not get a CVSS score from us.
I wonder if we omitted a field or if that’s generally the case…
As a matter of practice NVD re-scores most CVEs that come in: https://nvd.nist.gov/general/cve-process
This is done because a large number of CVEs that come in are of very low quality and you do need humans in the loop.
If you wrote that CVE you posted then you likely forgot a field if your own score is missing.
We don’t provide CVSS scores at all.
That could be it, I guess. You might reach out to MITRE and/or NVD to ask what’s up.
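If you just want to see what, if anything, NVD has on record for that CVE, their public API exposes the metrics block. Something along these lines should work (the JSON path is from memory, so it may need adjusting):
# Fetch NVD's record for the CVE mentioned above and print whatever CVSS metrics it carries.
curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?cveId=CVE-2022-46883" | jq '.vulnerabilities[0].cve.metrics'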
Interesting. Is there somewhere people are talking about this move? I’d love to know if it’s going to be a central index or about how they plan to integrate the package index.
I don’t remember if I’ve heard Apple talk about making a central index before. But the primary discussion spot before implementation, besides formal proposals, is generally the Swift Forums. For what it’s worth, here’s the package manager section.
Awesome. I’ll check it out, thanks!
I guess that means hella- didn’t become official.
Pour one out :(
I’m glad I mostly use distro packages rather than language “package managers”, containers & static linking.
If this is a client-side vuln we’ll also have to worry about the plethora of mobile apps that ship openssl, often unwittingly.
I’m prepared! (I’ve typed out sudo apt update && sudo apt upgrade and have my finger hovering over the enter button.)
Yup. I’m sure we’ll see a flurry of follow-on advisories for the 80 billion packages that thought shipping a bundled version of openssl was a good idea.
“but it’s easier to statically link / use a vendored shared library / etc”.
Deferring the tough problems until you’ve got 0-days in your production systems rather than dealing with complexity up front isn’t an especially great idea…
I get the points about static linking, but in reality it’s not that difficult to prompt a rebuild of those packages that statically link openssl.
Assuming you have a list of packages that statically link OpenSSL :)
Anything in rust that uses rust-openssl sometimes statically links OpenSSL…
Thankfully rust’s openssl crate is pretty well maintained and pretty commonly used, so I expect we’ll see an update to that as well on tuesday
How do I update all of the things that transitively depend on it? Across all my machines and all of the containers running on them?
That I don’t know exactly, but cargo should know what versions get used in each build and checking that against an eventual RustSec advisory shouldn’t be too hard.
Well it turns out that cargo (and rustup) statically link OpenSSL, so depending on the vulnerability, you could hit an RCE when cargo goes to fetch the RustSec advisories. (Like if it’s exploitable in common TLS client usage and someone poisons your DNS to tell your cargo to talk to their server)
Amusingly, the Rust OpenSSL bindings are still on the 1.x version: 3.0 has proven to be problematic for other reasons as well (the build depends on less widely available Perl modules, and there are some perf regressions).
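On the “check it against an eventual RustSec advisory” idea: cargo-audit already does more or less that, comparing the versions pinned in Cargo.lock against the RustSec advisory database (with the caveat above that cargo itself is the thing making TLS connections to fetch it):
# One-time install of the RustSec audit tool.
cargo install cargo-audit
# Check every version pinned in Cargo.lock against the RustSec advisory database.
cargo audit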
Pray also that no one decided to make one-off patches to rename functions or change argument variables
That’s basic package metadata which most package managers use.
Ah, I more mean ad-hoc hand-compiled packages. Sorry I wasn’t more specific.
Really regretting not maintaining a list
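In the absence of a list, one crude heuristic is that OpenSSL embeds a version banner (something like “OpenSSL 1.1.1q  5 Jul 2022”) in the library, and it usually survives static linking, so you can grep your binaries for it. The paths below are just examples; it’s a rough filter, not proof either way:
# Print binaries that appear to carry a bundled copy of OpenSSL.
for f in /usr/local/bin/* ~/.cargo/bin/*; do
  strings "$f" 2>/dev/null | grep -q -E 'OpenSSL [0-9]+\.[0-9]+' && echo "$f"
done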
Does this package statically link a vendored openssl 3.0? https://crates.io/crates/kv-assets What basic package metadata would indicate that?
Many package managers require a list of dependencies, including compile-time-only deps. I’m not familiar with “crates” but I think rust programs have a Cargo.toml or Cargo.lock listing dependencies? Or does rust allow implicit deps?
Cargo.toml lists immediate dependencies. Cargo.lock lists transitive dependencies.
What about this one? https://crates.io/crates/cargo-deny
https://github.com/EmbarkStudios/cargo-deny/blob/9da1f57bae9304852176adc66a40855688bb3dee/Cargo.lock#L1133
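For a crate you’re actually building, you don’t have to read the lock file by hand; something like cargo tree will tell you whether openssl-sys ended up in the dependency graph (whether it then bundles OpenSSL depends on whether the “vendored” feature is enabled):
# Show what, if anything, pulls openssl-sys into this crate's dependency graph.
cargo tree -i openssl-sys
# Or just look for it in the lock file directly.
grep -n -A 1 'name = "openssl' Cargo.lock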
Filtering for applicability is great, and should have been a standard. I’m soooo tired of security tools crying wolf. Just like the desktop anti-virus software, they have lost sight of providing security, and become security theatre and a numbers metagame.
CVE databases are full of low-effort reports, like flagging of uses of RegEx. That’s a slam-dunk “Possible DoS critical vulnerability”.
CVSS scoring rules are designed to be cover-your-ass for security vendors, not for usability by people who need to respond to these reports. The scoring rules are “we can’t know if you have written vulnerable code, but you may have, so that’s a critical vulnerability in the code we imagine you may have written”.
If npm install reports fewer than 17 critical vulnerabilities, I double-check if the installation went properly.
Hate to say I agree with this. I work in security, and so many of my peers across now several organizations view the reports from these tools as absolute truth rather than raw data/input to further analysis. A vulnerability in a component does not necessarily translate into a vulnerability in your application, but it is what the scanners that you, and ultimately your customers, run will report.
We’ve seriously lost our way. There needs to be more rigor in how these tools are applied. So much focus in “Supply Chain Security” is instead put on shifting responsibility to open source developers who absolutely did not sign up to perform vulnerability verification services for their commercial users.
The world of security research is a weird one. There’s an incentive to get as many CVEs at as high a severity ranking as possible to bolster your resume. No one in hiring seems to care about quality :(
Host here. Curious what Lobsters thinks of this back story of Android. It’s wild to think about what the world might look like if Android hadn’t become so dominant and mobile was an Apple (or Microsoft or whomever) proprietary monoculture.
It’s worth remembering what the smartphone market looked like prior to the iPhone: Symbian had 70% of the market. The Symbian EKA2 kernel is beautiful (and the book about its internals is now a free download, everyone should read it) and Fuchsia’s design shares a lot in common with it. Symbian was moving towards open source and towards using Qt for the GUI (their existing userspace APIs were designed for systems with 2-4 MiB of RAM, which added a lot of cognitive load to solve problems that didn’t really exist on machines with 128+ MiB).
Android successfully killed Windows Phone and Symbian. If it hadn’t happened, then it’s quite likely that an EKA2+Qt stack would have become a major player. I wouldn’t be surprised if we’d ended up with a lot more diversity: I don’t think iOS would have killed Windows Phone and I don’t think either would have killed the EKA2+Qt. The really interesting question is what would have happened with things like Maemo / Meego / WebOS / FirefoxOS in a more competitive smartphone OS market.
The competing operating systems were mostly killed by network effects from the app stores. This is also what’s largely killed Android as an open-source platform: you can run AOSP on a handset, but if you don’t install the Google Play services then a load of third-party apps simply won’t run and you must use Google Play Services for a bunch of features (assuming you want those features) if you want to offer the app in the Google Play Store. Since most users use the Play Store, being excluded from there is a problem. I’m really looking forward to a competent antitrust regulator taking a look at this.
It’s entirely possible that one of the other vendors would have created a decent app store. HP had a very nice model for their WebOS app store where, in addition to their store, they provided the back-end services as something that you could repackage, so you could create your own store for your apps but people would still get all of the update and deployment things that they’d get buying from the HP storefront.
I think it’s more likely that we’d have seen much more of a push towards mobile-optimised web apps instead of native apps. If no platform had more than 20-30% market share then you could either build 5-6 native apps or one web app for the same market size and that’s a very different proposition to building two for the duopoly.
I never experienced that Symbian beauty. I needed to port a simple game to it, and its developer-facing weirdo C++ side left such a terrible impression on me that I only remember promising myself to never touch Symbian again. I still wouldn’t use Qt.
Anyway, I think Nokia killed itself, and it had nothing to do with the OS kernel. They ignored capacitive screens and tried to keep making featurephones until it was too late. Their popular low-end Symbian was unfit for the smartphone era, and their fancy Symbian, like Windows CE, was a toy desktop OS meant for running spreadsheets, not being a normal phone operated with fat fingers.
I miss WebOS. They did all the right things in software, but fast-enough hardware to run it didn’t exist yet. They ended up being free touch-UI R&D for Apple and Google to copy later.
Windows Phone was a decent attempt, but at that point having a 3rd party app ecosystem mattered.
Exactly. A beautiful kernel, hidden under an awful userspace.
I disagree. They had a great kernel and a terrible userland. They tried to fix this by replacing the kernel with Linux, which was completely inappropriate for devices with such a small amount of RAM (Android didn’t become successful until smartphones had at least 512 MiB of RAM). When this failed, they tried to jump on Windows Phone as an alternative.
They had a few teams building some quite nice UIs on top of Qt, but they kept changing the platform out from underneath.
Completely agreed. I got a TouchPad in their free-toys-for-open-source-developers programme and loved it, but they killed the entire ecosystem a few weeks after I got mine.
Yup, my partner had one and it had a great UI (I was astonished - I expected to hate it and ended up liking it more than iOS and Android), but no apps and a really buggy sound subsystem (crashed every few days which made alarms silent and phone calls not work).
Do you think the userspace could have been any better, given the resource constraints of the device and the state of C++ at the time? Would modern C++, or a newer language like Rust, enable a better userspace?
A bit of a tangent: I keep wondering if a modern device that embraces constraints on CPU speed and RAM, similar to Symbian-based or early embedded Linux devices, could achieve outstanding battery life. Your comment the other day about your Psion computer running on two AA batteries made me think again about this. In particular, I wonder how good it could get in a device designed specifically for blind people, and thus having no screen, but only audio output. The big thing I’m not sure about is how much battery power is consumed by CPU and DRAM compared to the wireless radio(s), particularly when wireless connectivity is enabled but the CPU is mostly idle.
There was PIPS, which provided POSIX (no fork but otherwise most of a POSIX syscall API). This would have, at least, been not worse than Linux / XNU as the system interface and would have made porting *NIX libraries much easier if it were the default. The kernel was explicitly designed to support multiple personalities, with the expectation that the EPOC32 interfaces would not last forever. POSIX probably wasn’t ideal but might have been necessary given that Android and iOS were both POSIX.
It depends a lot on how much you’re using the wireless. Active use can drain it quite a bit but modern wireless chipsets are really good at entering low-power modes where they are mostly asleep and wake up as passive receivers periodically to see if the base station has told them to wake up properly.
From what I remember, I got about 30 hours of active use from the Psion. I remember it told me how much current it had drawn, but I don’t remember the numbers. Wikipedia tells me that alkaline AAs can provide up to 3.9Wh, so probably something like 7-8Wh for a pair of them (that sounds high, I vaguely remember it being closer to 3). At 8Wh over 30 hours, you’re looking at a maximum draw of 250mW. That’s quite a lot for a microcontroller, and I think low-power WiFi chipsets can run at about 50mW (based on a 10-year-old and probably wrong memory), so this seems fairly feasible.
I was a Symbian fanboy and still miss my E61, but the “app store” (it didn’t really exist) was a mess. The entire user-facing software experience apart from the built-in apps was a mess. At one time there were 2 separate web browsers.
Still, WirelessIRC is the best mobile IRC client I’ve ever used to this day.
Do you happen to have a link to it?
That is interesting and I can see that fragmentation would favor the web. My understanding from this interview is that Google’s android bet was an attempt to keep the web (and google search accessed via the web) a viable thing in the mobile world and prevent the kind of monoculture that windows had in the desktop OS world pre-web.
Android was probably nothing at all until the iPhone, and indeed, really, until the App Store; Google needed to prevent Apple from becoming Microsoft, as you note. Android has succeeded at its initial intent, and too, has become quite good, which is, if I may be snarky, unusual for a Google product that has achieved its aims.
Honestly, I suspect app stores are a natural monopoly on a platform, and addressing that is the wrong symptom and likely to make the experience worse. The actual platforms themselves were what end-users could care about and make a choice over.
You’re probably right on PWAs making a lot of sense for the average case that has no need to take advantage of platform APIs though.
The problem isn’t the app store, the problem is the coupling of the Play Store and Play Services, which makes it incredibly difficult to ship a phone that doesn’t send a load of data to Google.
Palm may have seen more success with the WebOS platform.
Also I very much enjoy the show. Just wanted to get that out there!
Probably with https://www.gnu.org/software/parallel/ or just pushing them to background tasks in a bash script
A what now?
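(GNU parallel is a command-line tool for running jobs concurrently.) Both approaches are roughly a one-liner; in the sketch below, ./process-file and the inputs/*.txt glob are stand-ins for whatever the real command and files were:
# With GNU parallel: run up to 4 jobs at a time across all inputs.
parallel -j 4 ./process-file {} ::: inputs/*.txt
# Plain bash: background each job, then wait for them all to finish.
for f in inputs/*.txt; do
  ./process-file "$f" &
done
wait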
Firefox also makes Fenix for Android. Here is some information on profiling Java code for Fenix:
https://wiki.mozilla.org/Performance/Fenix/Profilers_and_Tools
The page even says to not use the Firefox Profiler if you:
Which I guess is now outdated!
TIL that Firefox ships a JDK?
Yeah was surprised by that one too. I assume they forgot the Script in JavaScript.
I wonder who at System76 was responsible for evaluating all possible directions they could invest in, and decided the desktop environment is the biggest deficiency of System76
It’s also great marketing. I’ve heard “System76” way more since they have released Pop_OS. So while people may not be buying machines for the OS it seems that as a pretty popular distro it keeps the name in their head and they may be likely to buy a system on the next upgrade.
Well I’d buy a machine, but they’re not selling anything with EU layouts or powercords.
I know a few people who run Pop_OS, and none of them run it on a System76 machine, but they all choose Pop over Ubuntu for its Gnome hacks.
Gnome itself isn’t particularly friendly to hacks — the extension system is really half-baked (though perhaps it’s one of the only uses of the SpiderMonkey JS engine outside Firefox, that’s pretty cool!). KDE Plasma has quite a lot of features, but it doesn’t really focus on usability the way it could.
There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.
Honestly, I think that if something more friendly than Gnome and KDE came along and was well-supported, it could really be a big deal. “Year of the Linux desktop” is a meme, but it’s something we’ve been flirting with for decades now and the main holdups are compatibility and usability. Compatibility isn’t a big deal if most of what we do on computers is web-based. If we can tame usability, there’s surely a fighting chance. It just needs the financial support of a company like System76 to be able to keep going.
It’s very difficult to do anything meaningful here. Consistency is one of the biggest features of a good DE. This was something that Apple was very good at before they went a bit crazy around 10.7 and they’re still better than most. To give a couple of trivial examples, every application on my Mac has the buttons the same way around in dialog boxes and uses verbs as labels. Every app that has a preferences panel can open it with command-, and has it in the same place in the menus. Neither of these is the case on Windows or any *NIX DE that I’ve used. Whether the Mac way is better or worse than any other system doesn’t really matter, the important thing is that when I’ve learned how to perform an operation on the Mac I can do the same thing on every Mac app.
In contrast, *NIX applications mostly use one of two widget sets (though there is a long tail of other ones) each of which has subtly different behaviour for things like text navigation shortcut keys. Ones designed for a particular DE use the HIGs from that DE (or, at least, try to) and the KDE and GNOME ones say different things. Even something simple like having a consistent ‘open file’ dialog is very hard in this environment.
Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.
Ok, so now we have:
kitchen sink / do everything: KDE
MacOS-like: Gnome
MacOS lookalike: Elementary
Old Windows: Gnome 2 forks (e.g. MATE)
lightweight environments: XFCE / LXDE
tiling: i3, sway etc etc (super niche).
something new from scratch but not entirely different: Enlightenment
So what exactly can be disrupted here when there are so many options? What is the disruptive angle?
I think you’re replying to @br, not to me, but your post makes me quite sad. All of the DEs that you list are basically variations on the 1984 Macintosh UI model. You have siloed applications, each of which owns one or more windows. Each window is owned by precisely one application and provides a sharp boundary between different UIs.
The space of UI models beyond these constraints is huge.
I think any divergence would be interesting, but it’s also punished by users - every time Gnome tries to diverge from Windows 98 (Gnome 3 is obvious, but this has happened long before - see spatial Nautilus), everyone screams at them.
I would hesitate to call elementary or Gnome Mac-like. They take some elements more than others, sure. But a lot of critical UI elements from Mac OS are missing, and they admit they’re doing their own thing, which a casual poke would reveal.
I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time. (I’d say Gnome 2 draws more from both.)
I would have argued that at one point. I’d have argued it loudly around 2001, which is the last time that I really lived with it for longer than 6 months.
Having just spent a few days giving KDE an honest try for the first time in a while, though, I no longer think so.
I’d characterize KDE as an attempt to copy all the trends, for all time, from Windows + Mac + UNIX, add a few innovations and an all-encompassing settings manager, and let each user choose their own specific mix of those.
My current KDE setup after playing with it for a few days is like an unholy mix of Mac OS X Snow Leopard and i3, with a weird earthy colorscheme that might remind you of Windows XP’s olive scheme if it were a little more brown and less green.
But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.
I stopped using KDE when 4.x came out (because it was basically tech preview and not usable), but before that I was a big fan of the 3.x series. They always had settings for everything. Good to hear they kept that around.
GNOME really isn’t macOS like, either by accident or design.
I am no longer buying this consistency thing and the idea that the Mac is superior. So many things we do all day are web-apps which all look and function completely different. I use gmail, slack, github enterprise, office, what-have-you daily at work and they are all just browser tabs. None looks like the other and it is totally fine. The only real local apps I use are my IDE, which is written in Java and also looks nothing like the Mac, a terminal, and a browser.
Just because it’s what we’re forced to accept today doesn’t mean the current state we’re in is desirable. If you know what we’ve lost, you’d miss it too.
I am saying that the time of native apps is over and it is not coming back. Webapps and webapps disguised as desktop applications a la Electron are going to dominate the future. Even traditionally desktop heavy things like IDEs are moving into the cloud and the browser. It may be unfortunate, but it is a reality. So even if the Mac was superior in its design the importance of that is fading quickly.
“The time of native apps is over .. webapps … the future”
Non-rhetorical question: Why is that, though?
Write once, deploy everywhere.
Google has done the hard work of implementing a JS platform for almost every computing platform in existence. By targeting that platform, you reach more users for less developer-hours.
The web is the easiest and best-understood application deployment platform there is. Want to upgrade all users? F5 and you are done. Best of all: it is cross-platform.
I mean, if you really care about such things, the Mac has plenty of native applications and the users there still fight for such things. But you’re right that most don’t on most platforms, even the Mac.
And that’s why the Linux desktop I use most (outside of work) is… ChromeOS.
Now, I primarily use it for entertainment like video streaming. But with just an SSH client, I can access my “for fun” development machine too.
Honestly, I’d say Windows is more easily extensible. I could write a shell extension and immediately reap its benefit in all applications - I couldn’t say the same for other DEs without probably having to patch the source, and that’ll be a pain.
GNOME HIG also keeps changing, which creates more fragmentation.
20 years ago, they did express a desire for unification: https://lwn.net/Articles/8210/
It certainly is a differentiator.
Hot dog I thought the Enlightenment desktop was dead. Happy to see that it’s not :)
It’s still getting frequent improvements! And Tizen uses EFL widgets. Enlightenment desktop even has full Wayland support!
This is a very…. non-nuanced title. But hey, who am I to disagree. Anyway, shoot if you have questions :)
My dream is that I fire up Firefox and it doesn’t make a single network request until I click a bookmark or type a URL and hit enter. Do you think there’s any hope of getting that as an option? As it is I’ve found it’s impossible to configure this behavior without external tools.
Unfortunately not. There are many things we can’t do out of the box, like Netflix (DRM), OpenH264(5?). We’ll also need updated info for intermediate certificates and revocations and then updates for the browser itself and addons. I could go on.
Surely it’s technically feasible to invent a pref and put all of those checks behind this pref. But there’s no point in shipping a not-very-usable browser from our perspective. Conway’s law further dictates that every team needs their own switch and config and backend. :) :(
Why do DRM and OpenH264 require network connections on startup?
I also don’t see how adding an option would render the browser not-very-usable, perhaps you meant something else?
AFAIK it’s a legal work-around: Mozilla can’t distribute an H264 decoder themselves so they have users (automatically) download one from Cisco’s website on their own machine. Sure, you could download it on demand when the user first encounters an H264 stream … but it would put Firefox at an even greater disadvantage compared to browsers willing to pay the MPEG extortion fee.
Obligatory Coding Horror link ; ). What you are looking for should be possible with proxies on Firefox (but not Chrome last I checked). I would suggest checking out the Tor browser fork and the extension API.
Wouldn’t Firefox download it whenever it updates itself? Not every time it starts up?
I am not the one who asked for this feature, but I’m sure they would be fine with an option in about:config. Failing that, a series of options to disable features that make unprompted requests would at least get them closer (some of the aforementioned features already have that).
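For anyone who wants to experiment along those lines, here is a minimal user.js sketch (dropped into the profile directory) with some of the prefs commonly used to reduce unprompted traffic. Treat the pref names as a best-effort assumption on my part rather than an official list - they vary between Firefox versions - and this is nowhere near the “zero requests until I act” ideal, since things like intermediate certificate and revocation updates aren’t covered:

```js
// user.js - illustrative sketch only; pref names may differ across Firefox versions.
user_pref("network.prefetch-next", false);                    // link prefetching
user_pref("network.dns.disablePrefetch", true);               // speculative DNS lookups
user_pref("network.captive-portal-service.enabled", false);   // captive-portal probe
user_pref("network.connectivity-service.enabled", false);     // connectivity checks
user_pref("app.update.auto", false);                          // background browser updates
user_pref("extensions.update.enabled", false);                // add-on update checks
user_pref("media.gmp-gmpopenh264.autoupdate", false);         // OpenH264 plugin updates
user_pref("toolkit.telemetry.enabled", false);                // telemetry submission
user_pref("datareporting.healthreport.uploadEnabled", false); // health report uploads
```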
That’s as far as I know and I’m too lazy to find out more 😝. Maybe the OP was talking about first launch?
Regardless of the exact legal and technical rationale, a web browser’s job is to display content to the user as fast as possible and pre-fetching resources eliminates lag. Whether that is checking for OpenH264 updates or simple dns-prefetching, the improvement in UX is what justifies the minimal privacy leakage from preemptively downloading oft-used resources. Or, at least that is what I think the OP was trying to get across : )
It could work as an about:config option, but you would still have to convince someone to spend resources to get it mainlined. Hence why I suggested checking the extension API : )
Given Tor’s threat model, I would assume they would have already done a much more thorough job at eliminating network requests that would compromise privacy. And if not, they would have the organizational capacity and motivation to implement and upstream such a feature. The Tor Browser can be used as a normal browser by disabling Onion routing via an about:config setting.
Pre-fetching sometimes eliminates lag and sometimes causes it by taking bandwidth from more important things. Maybe OP meant to argue that these concerns are negligible and not deserving of a configuration option, but it’s hard to infer it from what they wrote.
Not being privy to the details myself, I could see that counting as “distribution” where download on boot does not. #NotALawyer
My guess is that the Mozilla guy didn’t answer the question directly and it probably doesn’t actually download it with every start up as he seemed to imply.
I think it would be fair to include an option to allow power users to pull these updates rather than have them pushed. In the absence of this option, Mozilla is, or is capable of, collecting telemetry on my use of Firefox without my consent and violating the privacy ethos it espouses so much in its marketing.
If you proxy Firefox on launch (on Mac I use CharlesProxy) you can see the huge amount of phoning home it does at launch, even with every available update setting in Firefox set to manual/not-automatic.
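If you want to reproduce that observation yourself, Firefox can also be pointed at a local intercepting proxy purely through prefs. A sketch, assuming the proxy listens on 127.0.0.1:8888 (Charles’s default, if I remember right - adjust host and port to whatever you actually run):

```js
// Illustrative user.js snippet: route Firefox traffic through a local
// intercepting proxy so you can watch what it requests at startup.
// The host/port values are assumptions, not anything Firefox ships with.
user_pref("network.proxy.type", 1);             // 1 = manual proxy configuration
user_pref("network.proxy.http", "127.0.0.1");
user_pref("network.proxy.http_port", 8888);
user_pref("network.proxy.ssl", "127.0.0.1");
user_pref("network.proxy.ssl_port", 8888);
user_pref("network.proxy.no_proxies_on", "");   // don't bypass the proxy for anything
```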
Mozilla seems to be running in the opposite direction with sponsored links showing up now in the new tab page, etc. I could be wrong though…
Serious question: What do you think Firefox could learn from Chrome’s security? For example, where does Chrome do better?
This is off the top of my head. There are many differences. But here’s an interesting tradeoff:
Their UI has a native implementation which makes sandboxing and privilege separation easier. We have opted to implement our UI in HTML and JavaScript which is great for shared goals in accessibility, performance improvements, community contributions, and extensibility. But it also means that our most privileged process contains a full HTML rendering engine with JavaScript and JIT and all.
Has there been any consideration of tools like Caja to sandbox the JS that runs in that process?
Caja is for JS<>JS isolation, but the main threat here is in JS escaping to native code (e.g. through a JIT bug), where Caja has no power.
We’ve been using several restrictions in terms of what our UI code can do and where it can and cannot come from. E.g., script elements can’t point to the web, only to resources inside the Firefox package (e.g., the about: URL scheme). We’ve also implemented static analysis checks for obvious XSS bugs and are using CSP. We’ve summarized our mitigations in this fine blog post: https://blog.mozilla.org/attack-and-defense/2020/07/07/hardening-firefox-against-injection-attacks-the-technical-details/
Well, if not the most secure web browser on the market, then definitely the second most secure! (never mind that there are only two)
I’m really glad to see this kind of partitioning being done!
What percentage of the browser do you expect to be able to sandbox in this way? Isn’t there work going on to implement shared memory between WASM modules?
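For readers who haven’t followed it, the “shared memory between WASM modules” question refers to the WebAssembly threads proposal, where a Memory created with shared: true is backed by a SharedArrayBuffer and can be imported by multiple instances. A rough sketch of that API (the module bytes are omitted, so the instantiation lines are only indicative):

```js
// Sketch of shared memory under the WebAssembly threads proposal.
// A shared Memory must declare a maximum; its buffer is a SharedArrayBuffer.
// Requires an engine/context where shared memory is enabled.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 8, shared: true });

// Any instance importing this memory sees the same bytes, e.g.:
//   const a = await WebAssembly.instantiate(moduleBytes, { env: { memory } });
//   const b = await WebAssembly.instantiate(moduleBytes, { env: { memory } });
// (moduleBytes is hypothetical here.)

new Int32Array(memory.buffer)[0] = 42;   // visible to every importer of `memory`
console.log(memory.buffer instanceof SharedArrayBuffer); // true when shared
```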
Noise and/or heat is going to be an issue in a 1U server like that. If you want a rack-mountable case, then aim for a 4U or larger, but really, just build a tower if you’re not going to put it in a rack.
Is this still on track to get merged into the kernel?
This is exactly why I refuse to use npm, pip, etc. I only use the OS’s package manager, which uses a cryptographically signed package repo. I absolutely hate these hacks of workarounds.
And you are sure that zero packagers use NPM or pip as a source for the OS packages and not the source repo? (Am I being paranoid now?)
I’m sure there are. And I hate that. But at least it’s going through my OS’s package manager, making it easy to use a single interface for auditing potential security issues.
The issue is that sometimes you’re much much behind. For example python-cryptography is still stuck at 3.2.1 on RHEL8… So either you use pip… or a very old version…
Fortunately, that’s not an issue I have, being a BSD user with the nearly-always-up-to-date ports tree. I enjoy up-to-date software on a regular basis, with minimal lag between when a project’s release is published and when the ports tree gets updated to the new version.
How is this different than using pip? You manually download the file?!
The problem with per-language package repos like npm is that anyone and everyone has access to upload their project. That inherently means users must extend trust even to the most malicious of developers, who can upload malware to the repo.
In the case of FreeBSD ports, the ports tree is gated by FreeBSD developers who have the opportunity to audit every single part of creating new ports or updating existing ports. It’s much easier to place trust in a (relatively) small set of developers who ensure sanity before committal.
The package manager I use for my system (FreeBSD’s pkg) makes it incredibly easy to audit packages, even checking something called VuXML to see if any of your installed packages have known vulnerabilities. I can see which files (config, lib, application, etc.) have changed from their defaults, since pkg tracks hashes for each file it installs. Additionally, the package repo itself is cryptographically signed, so it’s not possible to inject malicious code in transit. If the server hosting the package repo is compromised, there’s no problem, since the private key material is stored elsewhere. And this bit of crypto is protected by the OS itself.
That’s fine in theory, but when someone packages a program for FreeBSD that uses a language-specific package manager, they use the built-in infrastructure in the ports tree that downloads the dependencies, then packages them in distfiles and records their hash. This is no more secure than pulling from upstream directly. The folks that package things for FreeBSD aren’t auditing the upstream any more than npm / pip / gem / whatever does.
The only thing that the signature gives you is an attestation that the package was built on a FreeBSD build machine and has not been tampered with between there and you by anyone who did not have access to the signing key. It does not give you any assurance that the build machine wasn’t compromised or that there weren’t supply-chain vulnerabilities upstream from the builders.
Most FreeBSD packages don’t use reproducible builds, so you don’t have any assurance that your packages contain the code that they claim they do: if you try to rebuild locally from the same port, you may or may not get the same binary. Is the one you got trojaned? Who knows.
pkg audit is great, but npm and friends have similar things that tell you if there are published vulnerabilities in their libraries. They have two problems. For example: pkg audit shows a vulnerability in curl’s URL parsing. Does it matter? Is curl used only with trusted URLs? Maybe it’s fine, but can a server-side redirect trigger it?
How minimal?
Sometimes minutes. Sometimes hours. Sometimes days. It depends on the time and resources of a volunteer-run project. For example, I’ve seen FreeBSD update the Tor port just minutes after a new release. FreeBSD generally updates Firefox to RC releases so that we can test what will be the next version before it comes out (which means we have a negative time window in this particular case.)
So basically the same boat as RHEL, then.
I wonder why the kernel community seems to have structural issues when it comes to filesystems - btrfs is a bit of a Superfund site, ext4 is the best most people have, and ReiserFS’s trajectory was cut short for, uh, Reasons. Everything else people would want to use (i.e. ZFS, but also XFS, JFS, AdvFS, etc.) is a hand-me-down from commercial Unix vendors.
On all of the servers I deploy, I use whatever the OS defaults to for a root filesystem (generally ext4) but if I need a data partition, I reach for XFS and have yet to be disappointed with it.
Ext4 is pretty darned stable now and no longer has some of the limitations that pushed me to XFS for large volumes. But XFS is hard to beat. It’s not some cast-away at all; it’s extremely well designed, perhaps as well as or better than the rest. It continues to evolve and is usually one of the first filesystems to support newer features like reflinks.
I don’t see why XFS couldn’t replace ext4 as a default filesystem in general-purpose Linux distributions; my best guess as to why it hasn’t is some blend of “not-invented-here” and the fact that ext4 is good enough in 99% of cases.
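Since reflinks came up just above: a reflink is a copy-on-write clone that shares blocks with the original until either file is written to. A quick way to check whether the filesystem you’re on supports them, sketched in Node with placeholder file names (COPYFILE_FICLONE_FORCE makes the copy fail instead of silently falling back to a full copy):

```js
// Sketch: try to create a copy-on-write clone ("reflink") of a file.
// File names are placeholders; run it on the filesystem you want to test
// (e.g. XFS created with reflink support, or btrfs).
const fs = require("fs");

try {
  fs.copyFileSync("big-file.img", "big-file.clone.img",
                  fs.constants.COPYFILE_FICLONE_FORCE);
  console.log("reflink created: the clone shares blocks until modified");
} catch (err) {
  console.log(`no reflink (unsupported filesystem or missing source file): ${err.message}`);
}
```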
It would be great if the recent uplift of xfs also added data+metadata checksums. It would be perfect for a lot of situations where people want zfs/btrfs currently.
It’s a great replacement for ext4, but not other situations really.
Yes, I would love to see some of ZFS’ data integrity features in XFS.
I’d love to tinker with ZFS more but I work in an environment where buying a big expensive box of SAN is preferable to spending time building our own storage arrays.
I’m not sure if it’s what you meant, but XFS now has support for checksums for at-rest protection against bitrot: https://www.kernel.org/doc/html/latest/filesystems/xfs-self-describing-metadata.html
This only applies to the metadata though, not to the actual data stored. (Unless I missed some newer changes?)
No, you’re right. I can’t find it, but I know I read somewhere in the past six months that XFS was getting this. The problem is that XFS doesn’t do block-device management, which means that at best it can detect bitrot but can’t do anything about it on its own, because (necessarily) the RAIDing would take place in another, independent layer.
It is the default in RHEL 8, for what it’s worth:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_file_systems/assembly_getting-started-with-xfs-managing-file-systems
Yep. I’ve been using XFS for 20 years now when I need a single-drive FS, and I use ZFS when I need a multi-drive FS. The ext4 and btrfs issues did not increase my confidence.
It is the default filesystem in Fedora since 33.
And in SUSE Enterprise since 2014. It still needs work:
https://documentation.suse.com/sles/15-SP1/html/SLES-all/cha-filesystems.html
https://en.wikipedia.org/wiki/SUSE_Linux_Enterprise#End-of-support_schedule