Raku handles this nicely: I asked if a certain thing should work a certain way, and it was near-instantly added to the specification, for implementations to align against at a later date.
Separating triage and implementation becomes really clean if you separate specification and implementation.
I dunno, that has problems as well. If the behaviour gets implemented then people will try it out and decide if they actually like it and agree with it (in the betas for the new release, or whatever). Actually having to implement stuff also means the implementer has to think about it more.
Considering that π generally is not used in any “correct” (à la pi-does-not-exist) form in most calculations (not even at NASA), I do wonder how many curves would be needed to reach that “NASA-approved circle”; 4 is already almost spot on, 8? 12?
If you use four Béziers, it’s off by 2.7e-4;
with eight Béziers, it’s off by 4.2e-6;
16, 6.6e-8;
32, 1.0e-9;
64, 1.6e-11;
128, 2.5e-13;
256, 4.4e-15.
So four isn’t enough to be pixel-perfect, but eight probably is pixel-perfect up to a few thousand PPI.
What definition/representation of a circle would you use to test this?
Check that the distance between points on the Bézier curve and the centre of the circle matches the radius of the circle to the required precision.
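A sketch of that check, using the standard construction (one cubic per 2π/n of arc, with control handles of length k = (4/3)·tan(π/(2n))):

// a sketch: build one segment of an n-segment cubic-Bézier unit circle
// (all segments are congruent) and find the worst radial error
function maxRadialError(n) {
  const a = 2 * Math.PI / n;             // arc swept by one segment
  const k = (4 / 3) * Math.tan(a / 4);   // standard control-handle length
  const p0 = [1, 0], p1 = [1, k];
  const p3 = [Math.cos(a), Math.sin(a)];
  const p2 = [p3[0] + k * p3[1], p3[1] - k * p3[0]]; // handle along the end tangent
  let worst = 0;
  for (let i = 0; i <= 1000; i++) {
    const t = i / 1000, u = 1 - t;
    const x = u*u*u*p0[0] + 3*u*u*t*p1[0] + 3*u*t*t*p2[0] + t*t*t*p3[0];
    const y = u*u*u*p0[1] + 3*u*u*t*p1[1] + 3*u*t*t*p2[1] + t*t*t*p3[1];
    worst = Math.max(worst, Math.abs(Math.hypot(x, y) - 1));
  }
  return worst;
}
for (const n of [4, 8, 16, 32]) console.log(n, maxRadialError(n).toExponential(1));
// prints roughly 2.7e-4, 4.2e-6, 6.6e-8, 1.0e-9, matching the figures above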
This is cool! I think peg board is ideal for power supplies and cables. In my last place, I got some from the hardware store (pretty cheap by the sq. ft.), and put it inside in Ikea furniture. It helps when you need to slide things in and out.
The cables and power are the things that make a mess… You might have
a tiny Raspberry Pi
a small Intel NUC
a small external hard disk enclosure, or NAS
a small router
And then all the power bricks take up more space than the devices themselves.
I always wanted a desk with two layers, so you could put a solid surface over the peg board, to hide it.
I wouldn’t want to hang it up high in a visible place, but that works too.
I feel like the ideal thing for a bunch of relatively low-powered items would be a USB-C power board (240 W, 8 USB-C sockets), but I’ve never seen something like that yet. Beyond that, having little devices that request the correct power (5 V for a Pi, 12 V for a PC (obviously ditching a full PSU for a mini one), and so on and so forth) and then plug into the end devices seems very clean and tidy. Costs would balloon, however.
Oh yeah I’d be interested in something like that. A couple months ago I searched around and found stuff like this:
https://www.amazon.com/Charger-Adapter-Charging-Station-Extension/dp/B0BDLLJBDQ
But I didn’t actually buy it … not sure how well it works. Looks a little flimsy.
Maybe something from Anker or Monoprice, but I suspect you’re not going to see more than 5-6 ports, due to needing to satisfy the wide mix of USB power standards for something the average person would buy.
Maybe I’ll just get a normal power-board and plug a load of adapters into it and then put all of that in a larger box.
Actually one thing that would be cool is to just put a window shade over it. In my last place I also had a perfectly fitting translucent window shade to cover a laundry cubby, so you could do the same thing with the home lab on a wall. So you save space, but also don’t have a mess of electronics in view.
<dl>
<dt>A<dd>This comment will express an approach that
<dl>
<dt>1<dd>Provides the definitions to section references
<dt>2<dd>Allows manually choosing section references
</dl>
</dl>
I find ol to be more of an array-ish thing: expressing order in relation to other items. If it’s a key-value thing, dl fits the problem better.
Hi. I’m the creator of Vento. Thanks for sharing my project and the interesting comments.
Regarding autoescaping:
In my case, I like to be able to insert HTML code everywhere, without changing the templates. For example, let’s say I have the title The best movies and want to make “movies” bold. I like to be able to just write The best <strong>movies</strong> (or The best **movies** if I use markdown), without having to change the template. This is something that happens to me a lot, and with Nunjucks I had to add the safe filter everywhere. But I understand that not everybody has the same needs, so an option to autoescape by default is a good idea. I’d add it to the next version. Let me know if you have more questions. Thanks!
const noEscape = env.filters.noEscape; // hypothetical opt-out filter
const title = noEscape('The best <strong>movies</strong>'); // mark as pre-rendered HTML
const result = await env.run("t.vto", { title: title });
Would be how I’d imagine I’d approach that in a hypothetical vento-with-autoescaping future.
I was thinking of something like this:
And the unescape filter:
Am I blaming the rise of fascism and the downfall of Western civilization on the W3C’s pig-headed and flawed implementation of the OL tag in HTML?
A little, yes.
Sigh. Even if the point of the article - that OL tags are content, not presentation - were correct, extrapolating to the entire HTML spec being broken, and then going face first into Godwin’s law is a bit much for this reader.
The author was being facetious.
I don’t think Godwin’s law really covers this case. The article doesn’t make an earnest comparison to Nazi Germany and is self-aware about the statement being hard to swallow. I take the points that HTML is in a unique position to be a democratizing technology, but due to a technicality HTML was not kept as relevant to communicating law content as it was for science, and that could have negative effects on the breadth of the audience that observes and critiques law.
HTML really isn’t that good for communicating science, either, which is why most science journals are still in PDFs. There’s still no HTML element for a footnote!
That’s what they want you to think!
Footnotes exist in the CSS Generated Content for Paged Media Module!
Which is supported by Weasyprint! (and almost nothing else!)
(which I found out about from ‘Laying Out a Print Book With CSS’)
(further aside, https://print-css.rocks/ has a list of other tools with support for such things)
also in Paged.js
PrinceXML has partial support: https://www.princexml.com/doc/css-refs/
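For reference, the GCPM draft’s mechanism is tiny - a sketch:

/* pull the element out of the flow into the page’s footnote area,
   leaving an auto-numbered call marker behind */
.footnote { float: footnote; }
@page { @footnote { border-top: 1px solid black; } }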
What are the tradeoffs between using textual HTML templating versus using JSX and converting virtual DOM to string?
Off the top of my head:
HTML text templating:
JSX:
edit: “valid” here means syntactically valid - neither option prevents you from e.g. generating an img without alt
Flexibility is probably the main thing for me: you only need a template plus some data, nothing further. CLI tools mean that you don’t even need a host language like JSX does.
Similarly, some template formats are somewhat portable between implementations, nunjucks and jinja2, or the mustache family of templating tools.
The downside is that you can perish the thought of treating sub-templates as equivalent to client-side components the way JSX can: so while you can render all of these on many server-side platforms, pretty much forget about effortlessly porting such code to run on the client side, with lifetimes and swapping things in and out and all that dynamic goodness.
There was a big discussion about the JSX structured approach vs. text templating a while ago - https://lobste.rs/s/j4ajfo/producing_html_using_string_templates
The biggest issue to me is escaping. I would not use Vento, because it apparently does nothing about escaping, intentionally. On discarding the alternative, Nunjucks:
By default, all variables are escaped, so you have to remember to use the safe filter everywhere. This is not very convenient for my use case (static site generators), where I can control all the content and the HTML generated.
I’m reading that as the author wants
var myvar = '<tag>'
<p>{{ myvar }}</p>
to produce <p><tag></p>, not <p>&lt;tag&gt;</p>, because he doesn’t consider any of his data untrusted. OK, people can choose whatever they want, but it’s not a good design IMO.
Kinda reminds me of people who store HTML directly in the database, or store HTML-escaped data in the database … a big confusion of logic vs. presentation IMO. Not sure I could live in such a codebase.
I think textual templating with auto-escaping can be comparable to JSX. I think the main advantage would be getting URL escaping, and maybe some consideration for CSS or SVG. I just tried a little snippet and it appears JSX doesn’t know anything about URLs.
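For instance, a quick sketch of why context matters - HTML entity escaping does nothing useful in a URL context:

// the same untrusted string needs different treatment per output context
const q = 'a "quote" & <tag>';
const inHtml = q.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;").replace(/"/g, "&quot;");
const inUrl = "/search?q=" + encodeURIComponent(q); // percent-encoding, not entities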
I think JSX is generally a pretty good idea, but it’s not free, requiring language support / a compilation step. And maybe it doesn’t go far enough – it’s specific to HTML/XML.
Yeah, I wish Vento had autoescaping at least as an option. Escaping is not just for XSS – sometimes you just have text that wasn’t written for an HTML context.
I like the idea of JSX, but prefer using textual templating in practice – mostly due to my love of leaving out closing tags.
I searched up “sun position in melborune” and followed a link and I got the answer without using any concept of time.
Do normal humans publish “waking hours”? Not typically.
They should though. From the other angle, if I said “I’m awake from 12 to 21”, or whatever, then you would know — without conversion — what times I am awake. With timezones, you need to go through and convert two lots of time.
just because [something] doesn’t mean Uncle Steve will be awake.
And can’t that be said for any setup?
Applying one set of conveniences to one side (the mind to just google the answer), and then applying vastly the opposite to the other side… ugh!
Uncle Steve is a nightwatchman and sleeps all day. Uncle Steve is a garbage man and works early morning and takes afternoons off.
The daemontools design allows for various helper tools. One of them is svok, which finds out whether a given service is running. This is just another Unix program that will exit with either 0 if the process is running, or 100 if it is not.
That’s cool and all, but I wouldn’t call that a dependency. For it to work, the depended-on service needs to be running; it doesn’t need to be in a working state, just running. And it needs to be running regardless of any services that depend on it.
systemd (my beloved) will instead, when running a service A that depends on B, also start B, and optionally wait for B to finish starting and become usable, and constantly check that it’s not deadlocked or the such (also optional) (and not often implemented).
If nothing depends on B? B never runs.
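A minimal sketch of what that looks like in unit-file terms (service names hypothetical):

# A.service
[Unit]
Requires=B.service   # starting A pulls in B
After=B.service      # and A waits until B has started

# B.service
[Service]
Type=notify          # B reports “started and usable” itself via sd_notify(3)
WatchdogSec=30       # B must ping the watchdog regularly or be treated as hung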
Allow User-Level Services
Isn’t that generally a minefield to implement? I’ve read so many articles about doing that the wrong way, I don’t think I could trust there is a right way without a lot of auditing of a mechanism that is insanely complex in its nuances. Having something I can lean onto, that I know I can lean onto, is appreciable.
Variations
And isn’t that the result? You can do anything through executables, as a result, everyone needs to do everything but everyone does everything a different way, leading to things that sometimes work, sometimes work together, sometimes don’t work together, etc. Eventually people need to remove foot-guns at the root of all things.
Allow User-Level Services
Isn’t that generally a minefield to implement? I’ve read so many articles about doing that the wrong way, I don’t think I could trust there is a right way without a lot of auditing of a mechanism that is insanely complex in its nuances. Having something I can lean onto, that I know I can lean onto, is appreciable.
What’s a minefield about this? Any user can run a service with nohup or screen. Because of how small each part of daemontools/runit is, the setuidgid executable literally just changes uid and gid before executing the command.
Systemd will manage services running as user and is much more mysterious from an admin perspective as to how it does it. I trust it, but what’s great about daemontools is that you always know exactly what’s happening. You don’t say, “Please figure out how to run this process as a different user.”, you say, “Execute this process with a different uid and gid”
Whether you trust that changing the uid and gid is giving you the isolation you want is up to you, but the daemontools way is to have one tiny program in charge of that so that you can be pretty confident when it is or isn’t happening, and so that the code is very easy to audit.
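For a flavour of how small that is, a typical daemontools run script is just this (a sketch; account and path hypothetical):

#!/bin/sh
# drop privileges, then become the daemon itself, running in the foreground
exec setuidgid www /usr/local/bin/myserver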
Key to daemontools, as well as qmail, is that for many decades (maybe still?) djb had a personally funded security bounty that I do not think anyone ever cashed in on.
Noting that I didn’t realise at the time that setuidgid was part of daemontools.
Hard for me to find now, but I have seen critiques of using su, sudo, runuser, and some others for dropping into a user account to run a service.
AFAIK there is a reason it’s never been able to be cashed in on.
https://news.ycombinator.com/item?id=23248461
There’s something weird about this post. The markdown in the first para is “raw”, not HTML. Every link is wrapped in a sketchy href.li redirect.
It didn’t look like this when first archived:
https://web.archive.org/web/20150110023042/http://infiniteundo.com/post/25509354022/more-falsehoods-programmers-believe-about-time
href.li is a domain that Tumblr uses; I don’t know why. So I doubt it’s because of any malicious reason.
Thanks for pointing that out. The entire page layout sets off blogspam red flags for me, and I didn’t recognize it as a Tumblr blog.
I went down a bit of a rabbit hole investigating this because I remember it from way back, and I’m afraid I accused the author of copying his own original work…
Good research, thank you! Editors, would it be possible to change both links in the OP to point to the archive.org equivalents?
Would it be possible to use a TKey for secure boot? I don’t know much about that process itself.
Compatibility issues with existing bootloaders would probably prevent you from using anything other than TPM 2.0. A second obstacle is that discrete TPMs are connected to your motherboard through an I2C bus, while the current TKey needs a USB port.
In principle though, they’re both HSMs with similar capabilities (though the TKey is more flexible). I see no obstacle to using the TKey to do secure boot, especially on an embedded device you can write all the firmware from.
(The one thing I haven’t wrapped my head around yet is how secure boot can even work with a discrete HSM: what prevents me from sending one bootloader to the HSM, and then booting with another bootloader anyway? I feel like there’s a missing component that should enforce the submitted and actual bootloader to be one and the same.)
I think you have to trust some of the mobo firmware/hardware to be tamper-resistant. And the security of the system depends on that tamper resistance.
A separate portable HSM like the Tkey could help a bit by forcing an attacker to get the contents of the internal HSM (so they can imitate it), which is harder than bypassing it. But you can do similar with only an internal HSM by requiring key material from the internal HSM to decrypt your drives, so it doesn’t seem like a big deal to me.
But if the attacker completely controls the laptop without you knowing, including the internal HSM, then I don’t think there’s any way a discrete HSM can help.
But if the attacker completely controls the laptop without you knowing, including the internal HSM, then I don’t think there’s any way a discrete HSM can help.
Yes, this is one thing I didn’t quite know (only suspected) and have been convinced of only the last few days. With a discrete TPM we can always send one bootloader for measurement, and execute another anyway. This opens up the TPM, and if BitLocker/LUKS didn’t require a user password we should be able to decrypt the hard drive despite using hostile/pirate/forensic tools instead of the expected chain of trust.
This means I wrote a mistake in my “Measured Boot” section of my post, I’ll correct it. And ping @Summer as well: sorry, the answer is no, because even though you could use a discrete TKey-like chip to plug into the I2C bus of your motherboard instead of a TPM, discrete chips most likely don’t actually work for secure boot. It only takes one Evil Maid to insert a TPM-Genie or similar between the motherboard and the discrete chip, and it’s game over.
But. If hardware security is not a goal, I think there is a way. If the motherboard can guarantee that it sends the bootloader it will execute over the I2C bus, and nobody “borrows” your laptop to insert a Chip in the Middle, then you can guarantee that the bootloader you execute is the one that is measured. Therefore, no software bootloader-altering attack will survive a reboot.
As another note, if an attacker has physical access to the machine (required to break into the internal HSM unless it is buggy), then they can do lots of other attacks if they instrument the laptop and then return it to you:
reading RAM (this isn’t a problem if your CPU transparently encrypts RAM, but I don’t know if any CPU does this)
attaching a device to the keyboard and then using that remotely as a keylogger or to type commands into the laptop once the user has already authenticated themselves (or similar for the pointer or any other trusted input peripherals)
attaching a device to the display and recording or transmitting what it sees (you can fit a long screen recording on a 1TB microSD if you’re okay with a low refresh rate)
I think really good tamper-resistant cases (if such things exist) or rigorous tamper-detection procedures are maybe the more important defence than clever tricks with HSMs for organisations that want to protect against physical attacks on portable hardware.
reading RAM (this isn’t a problem if your CPU transparently encrypts RAM, but I don’t know if any CPU does this)
Not mainstream, but:
The Xbox CPUs have done this since the Xbox 1.
AMD SEV does this, but in a stupid way. SEV-SNP does it in a somewhat less stupid way.
Intel TME does this with or without VMs, TDX then relies on it.
Arm CCA makes it optional (I think, possibly only encryption with multiple keys is optional) because they want to support a variety of threat models, but the expectation is that most implementations will provide encryption.
Neat. Thanks for sharing. If the hardware is fast enough then I’d hope that this becomes universal to protect against physical attacks and rowhammer-style attacks.
I’m not sure it protects against rowhammer: you can still induce bit flips, they’ll just flip bits in cyphertext, which will flip more bits in plaintext. It may make toggling the specific bits in the row that you’re using harder. Typically these things use AES-XTS with either a single key or a per-VM key, so someone trying to do RowHammer attacks will be able to deterministically set or clear bits, it will just be a bit tricky to work out which bits they’re setting.
On the Xbox One, this was patched into the physical memory map. The top bits of the address were used as a key selector. On a system designed to support a 40-bit physical address space with 8-12 GiB of physical memory, there was a lot of space for this. The keys for each key slot were provisioned by Pluton, with a command exposed to the hypervisor to generate a new key in a key slot. Each game ran in a separate VM that had a new key. For extra fun, memory was not zeroed when assigning it to game VMs (and, from the guest kernel to games) because that was slow, so games could see other game’s data, encrypted with one random AES key and decrypted with another (effectively, encrypting with two AES keys).
Sure, you’ll probably still be able to flip bits by hammering rows, but I think gaining information with it will be much harder with encryption. I’m not 100% on any of this, but I think not having knowledge of the bit patterns physically written to memory may make it more difficult to flip bits, too.
Can you specify what was stupid about AMD-SEV? I tried to work with it and remember being disappointed that it didn’t run a fully encrypted VM image out of the box, but you may have something more precise in mind?
SEV was broken even before they shipped hardware. The main stupidity was that they didn’t HMAC the register state when they encrypted it, so a malicious hypervisor could tamper with it, see what happened, and try again. Some folks figured out that you could use that to find the trap handler code in the guest and then build code reuse attacks that let you compromise it. They fixed that specific stupidity with SEV-ES, but that left the rollback ability which still lets you do some bad things. SNP closes most of those holes (though it doesn’t do anti rollback protection for memory, so malicious DIMMs can do some bad things in cooperation with a malicious hypervisor) and is almost a sensible platform. Except that they put data plane things in their RoT (it hashes the encrypted VM contents, rather than doing that on the host core where it’s fast and just signing the hash in the RoT) and their RoT is vulnerable to glitch injection attacks (doubly annoying for MS because we offered them a free license to one that is hardened against these attacks, which they’d already integrated with another SoC, and they decided to use the vulnerable one instead).
I worked with Arm on CCA and it’s what I actually want: it lets you build a privilege-separated hypervisor where the bit that’s in your TCB for confidentiality and integrity is part of your secure boot attestation and the bit that isn’t (which can be much bigger) can be updated independently. It’s a great system design that should be copied for other things.
There’s AMD SEV for this.
Looks like I underestimated how difficult tamper resistance actually is. Looks like the problem is actually fundamentally unsolvable: they take my machine, modify it a little bit, and I get an altered machine that spies on my keystrokes, on my screen… so realistically, if I have reason to believe my laptop may have been tampered with, I can no longer trust it, and I need to put it in the trash bin right away… Damn.
At least the “don’t let thieves decrypt the drive” problem is mostly solved.
Cool. I didn’t know about AMD SEV.
I’m sure I’m telling you things you already know, but with security you have to have a threat model or it’s just worrying/paranoia. Probably nobody in the whole world is interested in doing an evil-maid attack on you, and even if they are interested they would probably prefer a similar but less tough target if you took basic steps to protect yourself. If you have a threat model then you can take action. If you assume that your attacker has infinite motivation and resources then there is nothing you can do.
I’m sure I’m telling you things you already know, but with security you have to have a threat model or it’s just worrying/paranoia.
I have two relevant-ish anecdotes about that.
The first dates back years: I was attending a conference on security, mostly aimed at people who are more likely to be targeted than others, because their work is not only sensitive, but controversial. The presenter considered the Evil Maid attack likely enough that it was irresponsible to leave your laptop unattended at the hotel, and that special precautions should be taken if you were asked to hand over your phone before entering a… microphone-free room (do you actually trust whoever took your phone not to try and tamper with it?). So as much as I don’t think those things apply to me right now, it does apply to some people, and it’s nice to give them (and perhaps more importantly, the Snowdens among them) options.
The second one was me working on AMD SEV at the beginning of this very year. So we had this “sensitive” server app that we were supposed to sell to foreign powers. Since we can’t have them peek into our national “secret sauce” (a fairly ordinary sauce actually), we were investigating thwarting reverse engineering efforts. Two options were on the table, code obfuscation and DRM. Seeing the promise of AMD SEV we went the DRM route.
Cool, so AMD SEV transparently encrypts all RAM. But once I got to actually set it up, I realised it wasn’t an all-encompassing solution: AMD SEV doesn’t run a fully encrypted virtual machine; only the RAM is encrypted. All I/O was left in plaintext, and the VM image was also in cleartext. Crap, we need full disk encryption.
But then we have two problems: where do we put the encryption keys? Can’t have our representative type a password every time our overseas client wants to reboot the machine. So it must be hidden in the TPM. Oh and the TPM should only unlock itself if the right bootloader and OS is running, so we need some kind of Trusted Boot.
At this point I just gave up. I was struggling enough with AMD SEV, I didn’t want to suffer the TPM on top, and most of all I neither believed in their threat model nor liked their business model. I mean, the stuff was a fancy firewall. We make extra effort to hide our code from them, and they’re supposed to trust us to secure their network?
So as much as I don’t think those things apply to me right now, it does apply to some people, and it’s nice to give them (and perhaps more importantly, the Snowdens among them) options.
Sure, and I hope you were able to find a job/project more to your tastes after the firewall DRM project.
how do people generally set up their keyboards to input those characters? that’s an interesting story - I’ve never seen a writeup on that
I use a bunch of hotstrings in AutoHotKey; I can type ∈ by typing “;in”.
That’s cool. This approach requires a few keystrokes though - do you find yourself wishing you could chord them?
I originally set a few symbols to chords but find hotstrings both easier to remember and more extensible. At this point I have over 70 hotstrings; remembering 70 chords would make my head explode, much less fitting them all on the keyboard =)
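In AutoHotkey terms, the ;in hotstring above is a one-liner (a sketch; the exact options are a matter of taste):

; expand “;in” to ∈ immediately (*), even mid-word (?)
:*?:;in::∈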
The Julia editor plugins and REPL let you enter LaTeX commands for the symbols, then press tab to convert them. It works pretty well, though I would prefer something that let me fuzzy-search based on longer descriptions of the symbols and showed a preview of what they looked like; maybe I should write that.
If you are using X, then you can put it into .XCompose - e.g., (keep in mind that it is eager: that is, you cannot have multiple sequences sharing the same prefix - “in” will match before “into” or “infty”)
If you use Emacs, then you really have too many ways to approach this. The one I use the most is abbrev-mode, with e.g. (they are spread over multiple different tables in my init.el):
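A minimal sketch of what such a table looks like (entries hypothetical):

(define-abbrev-table 'global-abbrev-table
  '(("8in"  "∈")   ;; typing 8in then SPC expands to ∈
    ("8inf" "∞")))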
Of course, Emacs has something like this already built-in: set-input-method (C-x RET C-\) allows you to choose TeX, which lets you use LaTeX-like syntax for many symbols (e.g, \in, \infty, \exists), and insert-char (C-x 8 RET) gives you access to all of Unicode.
This question made me want to look into it for my own use:
For Gnome, IBus is used, which can read a .XCompose file, which can be formatted like
include "%L"
<Multi_key> <e> <l> <o> <f> : "∈" U2208
which gives me access to such a symbol using the Compose key
A chorded approach might be possible: a custom keyboard layout with the relevant symbols as the third option (Shift is second; usually right Alt is third), or a plugin for IBus might also be appropriate.
Before now I’ve just used copy-paste, however.
Another useful way is Emacs’ TeX input-method, where you can type \in and it will give you ∈.
For a pre-created .XCompose file with many mathematical symbols etc. already defined, see https://github.com/kragen/xcompose
I use WinCompose (the docs recommend it, but I already had it installed for other reasons). It makes characters very easy to enter - Alt+>> for », Alt+{( for ⊂, etc. I had to put in my own sequence for 「」, which it also makes easy.
emphasis not mine.
… If you want LTS for free, do it yourself.
Those of us creating medical devices, software for factory lines, and systems that matter need a minimum of 15 years for an LTS, especially if you have it in your fool head we should pay money for it.
I don’t understand dictating design decisions without involving any invested parties, e.g. publishers or browser makers
Outside of those mentioned, Servo, Flow, & Ladybird would greatly benefit from such a project.
A less socially taxing direction would be to make a can-I-use of the smaller browsers that exist, so publishers can target such browsers a lot more easily.
I think that the best way to build something like this is to build two versions:
One that uses existing web technologies and runs in a browser.
One that runs natively.
If you use WebAssembly and Canvas, you have most of what I want from a lightweight application delivery system (which seems to be what this is aiming for, rather than the document model goal of some other things in this space). If you build something that runs in the browser then people can build applications targeting it, and if it really is a good design then your native version will be faster or use less memory, so you can encourage people to use it for the sites that support it.
Have you seen handmade.network’s Orca (I specify because there are only like 50 software projects called Orca)? It’s based on WebAssembly and GLES rather than wasm and Canvas. Basically the idea is wanting to do the same thing as web applications, but without building the UI layer on top of a document rendering platform and 20 years of historical accidents.
It requires checking by the recipient that they received the honest payload, yes?
https://www.rfc-editor.org/rfc/rfc6920 seems like a more accessible, offline-capable (à la local-network only: no contract system required, can verify the data as soon as it’s received) option, which gives you something like nih:sha-256-32;53269057;b, and is inherently portable: if a server doesn’t have a file, it can proxy through to another server and duplicate it, etc. Clients can switch servers since ni:/nih: is made to work without authority.
My short form was uM73gznF; with the capitals, it’s a bit hard to verbalize, which would be one major use-case for shorter URLs. ’Course this is resolvable.
There are two possible explanations for this effect
There are way more than two.
Advocates of [strong artificial intelligence] are keenly influenced by modern computers. Brains are like computers, they reason, and minds are like computer programs; so a mind can exist on any suitable computer, whether it’s made of gray matter, silicon, or a ‘suitably arranged collection of beer cans.’ Consciousness isn’t anything special, protagonists would argue, it’s just something that happens when you run the right computer program.
via https://uh.edu/engines/epi3041.htm
Well, the tech industry just isn’t that good at software.
It took only, what, 3 people to invent the transistor? Most programming languages and tools were made by tiny numbers of people. The industry isn’t that good, sure, but individuals have managed time and again to make leaps of progress.
If developers genuinely need experimental features to write software in Rust, is it really ready for production use?
The author concedes earlier in the piece that developers don’t actually need the features, in the sense of “writing production-quality libraries is impossible without these features”. Rather they “need” the features in the sense of “the programming experience is much nicer with these features than without”.
This is then parlayed somehow into “move fast and break things” and “stability appears to be less of a priority” and other cheap smears.
What’s worse is the lack of commitment to the bit. The author claims to be perfectly willing to wait for a new official compiler release to contain new features, for example (though also claims not to know what compiler version is being used in Go or Java projects, which both contradicts the “willing to wait” claim and is just not super likely to be true – Java is sort of infamously sensitive to what version you’re using, for example). But that opens up, say, Go to exactly the same criticism. Suppose I were to write something like:
Go went “stable” in 2012 with the release of 1.0, but is now, just over a decade later, on 1.20 and prepping 1.21. Really? They were missing or messed up so much stuff it’s taken them over two releases a year for a decade to fix, with no end in sight? And you expect me to trust this for reliable, stable systems? LOL! I guess getting things right just isn’t a priority for Go!
This would be about as honest and about as accurate and about as helpful as the author’s ranting about Rust.
No, but cheap smears against Rust are easy to emulate against other languages. If someone wants to claim that, say, programmers’ desire to use new features is a flaw in Rust, then I’m going to retort that the desire for Go to keep developing new features must also be a flaw. Can’t have the one complaint without allowing the other.
Really? They were missing or messed up so much stuff it’s taken them over two releases a year for a decade to fix, with no end in sight? And you expect me to trust this for reliable, stable systems? LOL! I guess getting things right just isn’t a priority for Go!
This uncharitable portrayal of the author’s attitude seems baseless to me. Nowhere does the author laugh at Rust (“LOL!”), allege that “getting things right just isn’t a priority for” Rust, or suggest that Rust’s continuing development is caused by bad design (having “messed up so much stuff”).
Rust is my favorite programming language, and I disagree with the submission, yet I sympathize more with the submission than with your comment.
More than the feature itself, it’s the air of mystery around it that baffles me. What are you protecting me against? On which sites? Why does Mozilla’s seal of approval bypass this, rather than giving the user the option to selectively enable those add-ons on specific sites? The permissions system already has a way to grant add-ons permissions to specific pages only, why is this additional layer necessary?
One of the quirks of working in the open and of working in a large organization is that often the code and the communications prep get out of sync. If you decide you’re shipping on a fixed schedule and your communications event falls off-cycle then you have to decide whether to ship something and then explain it or explain something and then ship it.
From the language of “quarantined domains” and browsing through the related bugzilla entries I get a very different sense than the author of this post has. It isn’t “quarantined extensions” and the user will have the ability to override so this really seems like a usability kludge rather than a market power grab. The author of this post shows the extension panel with the warning but doesn’t seem to link to the same page as that “Learn more” link. That page clearly reiterates that these are to improve security for users and the user will have the ultimate decision.
If I had to guess, I would say that these are related to accidental disclosures of sensitive information from sites to extensions, and the confidentiality is to allow the affected sites to finalize their own messaging about the potential security issues for their users. Just a guess.
My guess is that google/youtube said to mozilla: “look, you have to stop with these download extensions because copyright blablablabla or otherwise we lock you out of youtube and you will be irrelevant.” I am not saying that is what happened, but that is how it feels to me.
Where is the youtube thing coming from? Youtube only appeared in the blog post because it was manually added by the author for demonstration purposes. Mozilla isn’t actually blocking any extensions on youtube. Right now the restricted domains list is empty, but if I had to guess I’d think the restricted domains would be comprised of mozilla owned domains or banking sites (since the list is ostensibly for security purposes).
Google is experimenting with blocking adblockers on Youtube. Maybe the thinking is, instead of getting into an arms race with blockers, it’ll just get Firefox to turn them off.
Protecting against losing money or logins, on sites dealing with money or logins. Mozilla’s seal of approval is given as part of Mozilla actively reviewing those specific extensions and their changes. AFAIK the permissions system doesn’t have a way to grant permissions to all-except-specific-pages, hence requiring additional changes there.
But no! Surely these are the interesting parts of the NFC.
You forced my hand, binary and human-speakable formats added to the article.
Thank you for your service.
Piece is from 2012.
Noting that both Cython and Nuitka use “compiler” and not “transpiler”.
I would like to see the sources for the lies that are being refuted.
how do people generally set up their keyboards to input those characters? that’s an interesting story - I’ve never seen a writeup on that
I use a bunch of hotstrings in AutoHotKey, I can type ∈ by typing
;in
That’s cool. This approach requires a few keystrokes though - do you find yourself wishing you could chord them?
I originally set a few symbols to chords but find hotstrings both easier to remember and more extensible. At this point I have over 70 hotstrings; remembering 70 chords would make my head explode, much less fitting them all on the keyboard =)
The Julia editor plugins and repl let you enter latex commands for the symbols then press tab to convert them. It works pretty well, though I would prefer something that let me fuzzy search based on longer descriptions of the symbols and showed a preview of what they looked like, maybe I should write that.
If you are using X, then you can put it into .XCompose - e.g.,
(keep in mind that it is eager: that is, you cannot have multiple sequences sharing the same prefix - “in” will mach before “into” or “infty”)
If you use Emacs, then you have really too many ways to approach this. The one I use the most is
abbrev-mode
, with e.g. (they are spread over multiple different tables in myinit.el
):Of course, Emacs has something like this already built-in:
set-input-method
(C-x RET C-\) allows you to chooseTeX
, which lets you use LaTeX-like syntax for many symbols (e.g, \in, \infty, \exists), andinsert-char
(C-x 8 RET) gives you access to all of Unicode.This question made me want to look into it for my own use:
For Gnome, IBus is used, which can read a .XCompose file, which can be formatted like
which gives me access to such a symbol using the Compose key
A chorded approach might be possible: a custom keyboard layout with the relevant symbols as the third level (shift is the second, usually right alt is the third), or a plugin for IBus might also be appropriate.

Before now I’ve just used copy-paste, however.
Another useful way is Emacs’ TeX input-method, where you can type \in and it will give you ∈.

For a pre-created .XCompose file with many mathematical symbols etc. already defined, see https://github.com/kragen/xcompose

I use WinCompose (the docs recommend it, but I already had it installed for other reasons). It makes characters very easy to enter - Alt+>> for », Alt+{( for ⊂, etc. I had to put in my own sequence for 「」, which it also makes easy.
emphasis not mine.
… If you want LTS for free, do it yourself.
I don’t understand dictating design decisions without involving any invested parties, e.g. publishers or browser makers.
Outside of those mentioned, Servo, Flow, & Ladybird would greatly benefit such a project.
A less socially taxing direction would be to make a can-I-use of the smaller browsers that exist, so publishers can target such browsers a lot more easily.
I think that the best way to build something like this is to build two versions: one that runs in existing browsers, and one native.
If you use WebAssembly and Canvas, you have most of what I want from a lightweight application delivery system (which seems to be what this is aiming for, rather than the document model goal of some other things in this space). If you build something that runs in the browser then people can build applications targeting it, and if it really is a good design then your native version will be faster or use less memory, so you can encourage people to use it for the sites that support it.
Have you seen handmade.network’s Orca (I specify because there are only like 50 software projects called Orca)? It’s based on WebAssembly and GLES rather than Canvas. Basically the idea is wanting to do the same thing as web applications, but without building the UI layer on top of a document rendering platform and 20 years of historical accidents.
It requires checking by the recipient that they received the honest payload, yes?
https://www.rfc-editor.org/rfc/rfc6920 seems like a more accessible, offline-capable option (à la local-network only: no contract system required, and you can verify the data as soon as it’s received). It gives you something like nih:sha-256-32;53269057;b, and is inherently portable: if a server doesn’t have a file, it can proxy through to another server and duplicate it, etc. Clients can switch servers since ni:/nih: is made to work without an authority.
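For reference, forming the ni form is just a base64url-encoded SHA-256 digest; the nih form additionally truncates, hex-encodes, and adds a check digit. A sketch (the “Hello World!” output matches the RFC’s own example):

    # build an RFC 6920 ni URI (no authority) from raw bytes
    import base64, hashlib

    def ni_uri(data: bytes) -> str:
        digest = hashlib.sha256(data).digest()
        b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
        return "ni:///sha-256;" + b64

    print(ni_uri(b"Hello World!"))
    # ni:///sha-256;f4OxZX_x_FO5LcGBSKHWXfwtSx-j1ncoSt3SABJtkGk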
My short form was uM73gznF; with the capitals, it’s a bit hard to verbalize, and that would be one major use-case for shorter URLs. ’Course this is resolvable
There are way more than two.
via https://uh.edu/engines/epi3041.htm
It took only, what, 3 people to invent the transistor? Most programming languages and tools were made by tiny numbers of people. The industry isn’t that good, sure, but individuals have managed time and again to make leaps of progress.
The author concedes earlier in the piece that developers don’t actually need the features, in the sense of “writing production-quality libraries is impossible without these features”. Rather they “need” the features in the sense of “the programming experience is much nicer with these features than without”.
This is then parlayed somehow into “move fast and break things” and “stability appears to be less of a priority” and other cheap smears.
What’s worse is the lack of commitment to the bit. The author claims to be perfectly willing to wait for a new official compiler release to contain new features, for example (though also claims not to know what compiler version is being used in Go or Java projects, which both contradicts the “willing to wait” claim and is just not super likely to be true – Java is sort of infamously sensitive to what version you’re using, for example). But that opens up, say, Go to exactly the same criticism. Suppose I were to write something like:
“Go keeps shipping new language features. LOL! Getting things right just isn’t a priority for the Go team - they must have messed up so much stuff the first time around that they have to keep redesigning the language.”

This would be about as honest and about as accurate and about as helpful as the author’s ranting about Rust.
So Rust’s not-labeled-stable Nightly channel is comparably stable to Go’s labeled-stable 1.20?
No, but cheap smears against Rust are easy to emulate against other languages. If someone wants to claim that, say, programmers’ desire to use new features is a flaw in Rust, then I’m going to retort that the desire for Go to keep developing new features must also be a flaw. Can’t have the one complaint without allowing the other.
This uncharitable portrayal of the author’s attitude seems baseless to me. Nowhere does the author laugh at Rust (“LOL!”), allege that “getting things right just isn’t a priority for” Rust, or suggest that Rust’s continuing development is caused by bad design (having “messed up so much stuff”).
Rust is my favorite programming language, and I disagree with the submission, yet I sympathize more with the submission than with your comment.
More than the feature itself, it’s the air of mystery around it that baffles me. What are you protecting me against? On which sites? Why does Mozilla’s seal of approval bypass this, rather than giving the user the option to selectively enable those add-ons on specific sites? The permissions system already has a way to grant add-ons permissions to specific pages only; why is this additional layer necessary?
One of the quirks of working in the open and of working in a large organization is that often the code and the communications prep get out of sync. If you decide you’re shipping on a fixed schedule and your communications event falls off-cycle then you have to decide whether to ship something and then explain it or explain something and then ship it.
From the language of “quarantined domains” and browsing through the related Bugzilla entries, I get a very different sense than the author of this post has. It isn’t “quarantined extensions”, and the user will have the ability to override, so this really seems like a usability kludge rather than a market power grab. The author of this post shows the extension panel with the warning but doesn’t seem to link to the same page as that “Learn more” link. That page clearly reiterates that these are to improve security for users and that the user will have the ultimate decision.
If I had to guess, I would say that these are related to accidental disclosures of sensitive information from sites to extensions, and the confidentiality is to allow the affected sites to finalize their own messaging about the potential security issues for their users. Just a guess.
My guess is that google/youtube said to mozilla: “look, you have to stop with these download extensions because copyright blablablabla or otherwise we lock you out of youtube and you will be irrelevant.” I am not saying that is what happened, but that is how it feels to me.
Where is the youtube thing coming from? Youtube only appeared in the blog post because it was manually added by the author for demonstration purposes. Mozilla isn’t actually blocking any extensions on youtube. Right now the restricted domains list is empty, but if I had to guess I’d think the restricted domains would be comprised of mozilla owned domains or banking sites (since the list is ostensibly for security purposes).
Google is experimenting with blocking adblockers on Youtube. Maybe the thinking is, instead of getting in a rat race with blockers, it’ll just get Firefox to turn them off.
Firefox already has a tiny market share; making it less useful will only decrease that.
Why not simply disable adblocking on Google properties in Chrome?
because then people would flock to firefox
Nah, they’d switch to Edge. It’s Chromium based so it would be more familiar.
yeah something like that
Rather than threatening to lock them out of Youtube, a much more direct (and omnipresent) threat is that they could just cut Mozilla’s funding.
Protecting against losing money or logins, on sites dealing with money or logins. Moz’s seal of approval is given as part of Mozilla actively reviewing those specific extensions and their changes. AFAIK the permissions system doesn’t have a way to grant permissions to all-except-specific-pages, hence requiring additional changes there.