Almost all of Theo’s communications are straightforward and polite. It’s just that people cherry-picked and publicized the few occasions where he really let loose, so he got an undeserved reputation for being vitriolic.
To be fair, they should also mark as “Not Secure” any page running JavaScript.
Also, pointless HTTPS adoption might reduce content accessibility without blocking censorship.
(Disclaimer: this does not mean that you shouldn’t adopt HTTPS for sensitive content! It just means that using HTTPS should not be a matter of fashion: there are serious trade-offs to consider)
By adopting HTTPS you basically ensure that nasty ISPs and CDNs can’t insert garbage into your webpages.
[Comment removed by author]
Technically, you authorize them (you sign actual paperwork) to get/generate a certificate on your behalf (at least this is my experience with Akamai). You don’t upload your own SSL private key to them.
Because it’s part of The Process. (Technical Dark Patterns, Opt-In without a clear way to Opt-Out, etc.)
Because you’ll be laughed at if you don’t. (Social expectations, “received wisdom”, etc.)
Because Do It Now. Do It Now. Do It Now. (Nagging emails. Nagging pings on social media. Nagging.)
Lastly, of course, are Terms Of Service, different from the above by at least being above-board.
No.
It protects against cheap man-in-the-middle attacks (like the one an ISP could mount), but it can do nothing against the CDNs themselves, since CDNs serve you JavaScript over HTTPS.
With Subresource Integrity (SRI) page authors can protect against CDNed resources changing out from beneath them.
Yes, SRI mitigates some of the JavaScript attacks I describe in the article, in particular the nasty ones from CDNs exploiting your trust in a harmless-looking website.
Unfortunately several others remain possible (just think of JSONP, or, even simpler, a website that colludes in the attack). Also, it needs widespread adoption to become a real security feature: it should probably be mandatory, but at the very least browsers should mark as “Not Secure” any page downloading programs from CDNs without it.
Where SRI could really help is with the accessibility issues described by Meyer: you can serve most page resources as cacheable HTTP resources if their content hashes are declared in an HTTPS page!
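Concretely, the declared hash is just a base64-encoded digest of the file. Here is a minimal sketch (the script bytes and CDN URL are made up for illustration) of how a page author could compute the SRI value to pin:

```python
import base64
import hashlib

def sri_value(data: bytes, algo: str = "sha384") -> str:
    """Return an SRI string like 'sha384-<base64 digest>'."""
    digest = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

# Stand-in for the CDN-served file; any real file's bytes work the same way.
script = b"console.log('hello');"
print(f'<script src="https://cdn.example.com/app.js" '
      f'integrity="{sri_value(script)}" crossorigin="anonymous"></script>')
```

If the CDN (or any cache in between) serves even one different byte, the browser recomputes the digest, sees the mismatch, and refuses to run the script.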
With SRI you can stop the CDNs you use for external JS from manipulating the webpage.
I also don’t buy the claim that it reduces content accessibility; the link you provided above describes a problem that would be solved by simply using an HTTPS caching proxy (something a lot of corporate networks seem to have no problem operating, considering TLS 1.3 explicitly tries not to break those middleboxes).
As much as I respect Meyer, his point is moot. MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. Some companies have even made out-of-the-box HTTPS URL filtering their selling point. If people are ready (or forced) to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, that’s their problem, not webmasters’. We should be ready to teach those in need how to set one up, of course, but that’s about it.
MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. […] If people are ready (or forced) to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, that’s their problem, not webmasters’.
Well… how can I say that… I don’t think so.
Selling an HTTPS MitM proxy as a security solution is plain incompetence.
Beyond the obvious risk that the proxy gets compromised (you should never assume it won’t), which is pretty high in some places (and not only in Africa… don’t be naive, a chain is only as strong as its weakest link), a transparent HTTPS proxy has an obvious UI issue: people do not realise that it’s unsafe.
If browsers don’t mark those pages as “Not Secure” (and how could they?), users will overlook the MitM risks, turning a security feature against the users’ real security and safety.
Is this something webmasters should care about? I think so.
Selling an HTTPS MitM proxy as a security solution is plain incompetence.
Not sure how to tell you this, but companies have been doing this on their internal networks for a very long time, and it’s basically standard operating procedure at every enterprise-level network I’ve seen. They create their own CA, generate an intermediate CA certificate and key, and then put that on an HTTPS MitM transparent proxy that inspects all traffic going in and out of the network. The intermediate cert is added to the certificate store on all devices issued to employees so that it is trusted. By inspecting all of the traffic, they can monitor for external and internal threats, scan for exfiltration of trade secrets and proprietary data, and keep employees from watching porn at work. There is an entire industry around products that do this; Blue Coat and Barracuda are two popular examples.
There is an entire industry around products that do this
There is an entire industry around ransomware. But that does not mean it’s a security solution.
It is; it’s just that the word “security” is better understood as “who” is being secured (or not) from “whom”.
What you keep saying is that a MitM proxy does not protect the security of end users (that is, employees). What it does, however, in contexts like the one described above, is help protect the organisation in which those end users operate. Arguably it does, because it certainly makes it more difficult to protect yourself from something you cannot see. If employees are seen as a potential threat (they are), then reducing their security can help you (the organisation) with yours.
I wonder if you did read the articles I linked…
The point is that, in a context of unreliable connectivity, HTTPS dramatically reduces accessibility, but it doesn’t help against censorship.
In that context, we need to grant people both accessibility and security.
An obvious solution is to give them cacheable HTTP access to contents. We could fool the clients into trusting a MitM caching proxy, but since all we want is caching, that is not the best solution: it adds no security, only a false sense of it. Thus, in that context, you can improve users’ security by removing HTTPS.
I have read it, but more importantly, I worked in and built services for places like that for about 5 years (Uganda, Bolivia, Tajikistan, rural India…).
I am with you that an HTTPS proxy is generally best avoided, if for no other reason than that it grows the attack surface. I disagree that removing HTTPS increases security. It adds a lot more places and actors that can now negatively impact the user, in exchange for the user merely knowing this without being able to do much about it.
And that is even without going into which content is safe to be cached in a given environment.
And that is even without going into which content is safe to be cached in a given environment.
Yes, this is the best objection I’ve read so far.
As always it’s a matter of tradeoff. In a previous related thread I described how I would try to fix the issue in a way that people can easily opt-out and opt-in.
But while I think it would be weird to remove HTTPS from an e-commerce cart or a political forum, I think that most of Wikipedia should be served over both HTTP and HTTPS. People should be aware that HTTP pages are not secure (even though it all depends on your threat model…) but should not be misled into thinking that pages going through a MitM proxy are secure.
An HTTPS proxy isn’t incompetence, it’s industry standard.
They solve a number of problems and are basically standard in almost all corporate networks with a minimum security level. They aren’t a weak link in the chain, since traffic in front of the proxy is HTTPS and traffic behind it stays in the local network, encrypted under a network-level CA (you can restrict CA capabilities via TLS cert extensions; there is a fair number of useful ones that prevent compromise).
Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to a device, and at that level there is no reason to consider what the user is doing insecure.
Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to a device, and at that level there is no reason to consider what the user is doing insecure.
Browsers bypass the network configuration to protect the users’ privacy.
(I agree this is stupid, but they are trying to push this anyway)
The point is: the user’s security is at risk whenever she sees something that is not secure presented as HTTPS (which stands for “HTTP Secure”). It’s a rather simple and verifiable fact.
It’s true that posing a threat to employees’ security is an industry standard. But it’s not a security solution. At least, not for the employees.
And, doing that in a school or a public library is dangerous and plain stupid.
Nobody is posing a threat to employees’ security here; a corporation can in this case be regarded as a single entity, so terminating SSL at the borders of the entity, much as a browser terminates SSL by showing the website on a screen, is fairly valid.
Schools and public libraries usually have their internet filtered, yes, and that is usually made clear to the user beforehand (at least when I wanted access to either, I was in both cases informed that the network is supervised and filtered), which IMO negates the potential security compromise.
Browsers bypass the network configuration to protect the users’ privacy.
Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.
Schools and public libraries usually have their internet filtered, yes, and that is usually made clear to the user beforehand [..] which IMO negates the potential security compromise.
Yes this is true.
If people are kept constantly aware of the presence of a transparent HTTPS proxy/MitM, I have no objection to its use instead of an HTTP proxy for caching purposes. Marking all pages as “Not Secure” is a good way to gain such awareness.
Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.
Did you know about Firefox’s DoH/CloudFlare affair?
Yes I’m aware of the “affair”. To my knowledge the initial DoH experiment was localized and run on users who had enabled studies (opt-in). In both the experiment and now Mozilla has a contract with CloudFlare to protect the user privacy during queries when DoH is enabled (which to my knowledge it isn’t by default). In fact, the problem ungleich is blogging about isn’t even slated for standard release yet, to my knowledge.
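For those unfamiliar with the mechanics being debated: RFC 8484 DoH is just an ordinary DNS wire-format query carried over HTTPS; for GET requests it is base64url-encoded (with padding stripped) in a `dns=` parameter. A rough sketch, using Cloudflare’s resolver endpoint from the discussion (no network traffic is actually sent here):

```python
import base64
import struct

def doh_url(name: str, resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Build the RFC 8484 GET URL for an A-record query, without sending it."""
    # 12-byte DNS header: id=0, flags=RD, one question, no other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    query = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={query}"

print(doh_url("example.com"))
```

The privacy argument is precisely that whoever operates that resolver endpoint sees every name you look up.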
It’s plain old wrong in the bad kind of way; it conflates security maximalism with Mozilla’s mission to bring the maximum number of users privacy and security.
TBH, I don’t know what you mean with “security maximalism”.
I think ungleich raises serious concerns that should be taken into account before shipping DoH to the masses.
Mozilla has a contract with CloudFlare to protect the user privacy
It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.
AFAIK, even Facebook had a contract with its users.
Yeah.. I know… they will “do no evil”…
Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.
It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.
Cloudflare hasn’t done much that makes me believe they will violate my privacy. They’re not in the business of selling data to advertisers.
AFAIK, even Facebook had a contract with its users
Facebook used Dark Patterns to get users to willingly agree to terms they would otherwise never agree on, I don’t think this is comparable. Facebook likely never violated the contract terms with their users that way.
Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.
You should define “common user”.
If you mean the politically inept who are happy to be easily manipulated as long as they are given something to say and retweet… yes, they have nothing to fear.
The problem is for those people who are actually useful to the society.
Cloudflare hasn’t done much that makes me believe they will violate my privacy.
The problem with Cloudflare is not what they did, it’s what they could do.
There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.
But my concerns are with Mozilla.
They are trusted by millions of people worldwide. Me included. But actually, I’m starting to think they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.
So in your opinion, the average user does not deserve the protection of being able to browse the net as safe as we can make it for them?
Just because you think they aren’t useful to society (and they are; these people have all the important jobs, and someone isn’t useless because they can’t use a computer) doesn’t mean we, as software engineers, should abandon them.
There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.
Then don’t use it? DoH isn’t going to be enabled by default in the near future and any UI plans for now make it opt-in and configurable. The “Cloudflare is default” is strictly for tests and users that opt into this.
they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.
You mean safe because everyone involved knows what’s happening?
I don’t believe the concerns are really concerns for the common user.
You should define “common user”.
If you mean the politically inept who are happy to be easily manipulated…

So in your opinion, the average user does not deserve the protection of being able to browse the net as safe as we can make it for them?
I’m not sure if you are serious or you are pretending to not understand to cope with your lack of arguments.
Let’s assume the first… for now.
I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept. That’s obviously because, anyone politically inept is unlikely to be affected by surveillance.
That’s it.
they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.
You mean safe because everyone involved knows what’s happening?
Really?
Are you sure everyone understands what a MitM attack is?
Are you sure every employee understands that their system administrators can see the mail they read on GMail? I think you don’t have much experience with users, and I hope you don’t design user interfaces.
A MitM caching HTTPS proxy is not safe. It can be useful for corporate surveillance, but it’s not safe for users. And it extends the attack surface, both for the users and the company.
As for Mozilla: as I said, I’m just not sure whether they deserve trust or not.
I hope they do! Really! But it’s really too naive to think that a contract is enough to bind a company more than a subpoena. And they ship WebAssembly. And you have to edit about:config to disable JavaScript…
All this is very suspect for a company that claims to care about users’ privacy!
I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept.
I’m saying the concerns raised by ungleich are too extreme and should be dismissed on grounds of being not practical in the real world.
Are you sure everyone understand what is a MitM attack?
An attack requires an adversary, an evil one. An HTTPS caching proxy isn’t evil or an enemy; you have to opt into this behaviour. It is not an attack, and I think it’s not fair to characterise it as such.
Are you sure every employee understands that their system administrators can see the mail they read on GMail?
Yes. When I signed my work contract this was specifically pointed out and made clear in writing. I see no problem with that.
And it extends the attack surface, both for the users and the company.
And it also enables caching for users with less-than-stellar bandwidth (think third-world countries where satellite internet is common: 500 ms ping, 80% packet loss, 1 Mbps… you want caching for the entire network, even with HTTPS)
And they ship WebAssembly.
And? I have no concerns about WebAssembly. It’s not worse than obfuscated JavaScript. It doesn’t enable anything that wasn’t possible before via asm.js. The post you linked is another security-maximalist opinion piece with few factual arguments.
And you have to edit about:config to disable JavaScript…
Or install a half-way competent script blocker like uMatrix.
All this is very suspect for a company that claims to care about users’ privacy!
I think it’s understandable for a company that both cares about users privacy and doesn’t want a marketshare of “only security maximalists”, also known as, 0%.
An attack requires an adversary, the evil one.
According to this argument, you don’t need HTTPS until you have an enemy.
It shows very well your understanding of security.
The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.
I have no concerns about WebAssembly.
Not a surprise.
Evidently you never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).
Trust one who did both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.
As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.
According to this argument, you don’t need HTTPS until you have an enemy.
If there is no adversary, no Mallory in the connection, there is no reason to encrypt it either, correct.
It shows very well your understanding of security.
My understanding of security is based on threat models. A threat model includes whom you trust, whom you want to talk to, and whom you don’t trust. It includes how much money you want to spend, how much your attacker can spend, and the methods available to both of you.
There is no binary security; a threat model is the entry point, and your protection mechanisms should match your threat model as closely as possible or exceed it, but there is no reason to exert effort beyond it.
The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.
Mallory is a potential enemy. An HTTPS caching proxy operated by a corporation is not an enemy. It’s not Mallory; it’s Bob, Alice and Eve, where Bob wants to send Alice a message, she works for Eve, and Eve wants to avoid having duplicate messages on the network, so Eve and Alice agree that caching the encrypted connection is worthwhile.
Mallory sits between Eve and Bob, not between Bob and Alice.
Evidently you never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).
I did, in which case I either filed a GitHub issue (if the project was open source) or notified the company that shipped the JavaScript or optimized binary. Usually the bug then got fixed.
It’s not my duty or problem to debug web applications that I don’t develop.
Trust one who did both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.
Then don’t do it? Nobody is forcing you.
As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.
I don’t think you consider that a practical problem such as a bad connection can outweigh a lot of potential security issues: you don’t have the time or user patience to do it properly, and in most cases it’ll be good enough for the average user.
My point is that the problems of unencrypted HTTP and MitM’ed HTTPS are exactly the same. If one used to prefer the former because it can be easily cached, I can’t see how setting up the latter makes their security issues worse.
With HTTP you know it’s not secure. OTOH you might not be aware that your HTTPS connection to the server is not secure at all.
The lack of awareness makes MitM caching worse.
If you want secure and rather fast x86, look at Opterons 62xx and 63xx. They are still pretty fast and not vulnerable to many CVEs. Coupled with coreboot, they make for a nice desktop or a server.
If you want something faster, more secure and are not limited to x86, POWER9 with Talos II motherboard is a great choice.
It looks like a new single CPU Talos board is still $2500. I mean, that’s far cheaper than they were last time I looked, but still not entirely practical for many enthusiasts.
One of the biggest issues with other architectures is video decoding. A lot of decoders are written in x86_64-specific assembly. Itanium never had many codecs ported to EPIC, making it useless in the video editing space. There are hardware decoders on a lot of AMD/Nvidia GPUs, but then it comes down to drivers (amdgpu is open source, so you have a better shot there on POWER, but it’d be interesting to see if anyone has gotten that working).
You can hardware decode but you generally don’t want to hardware encode for editing. HW encoders have worse quality at the same bitrate vs. software.
Mesa support for decode on AMD is good; encode is starting to work but it’s pretty bad right now (compared to Windows drivers).
Decoding isn’t the problem. All modern lossy codecs are strongly biased towards decode performance, and once you’re at reasonable data rates, CPUs handle it fine. Encoding would be misery, because all software encoders are laboriously hand-tuned for their target platform, and you really don’t want to use a hardware encoder unless you absolutely have to.
The only reason you’d be stuck with x86 is if you’re running proprietary software and then chip backdoors are the least of your concerns.
The only reason you’d be stuck with x86
When I last saw it debated, everyone agreed x86 stomped all competitors on price/performance, mainly single-threaded. Especially important if you’re doing something CPU-bound that you can’t just throw cores at. One of the reasons is that only companies bringing in piles of money can afford a full-custom, multi-GHz, more-work-per-cycle design like Intel, AMD, and IBM. Although Raptor is selling IBM’s, Intel and AMD are still much cheaper.
Actually, POWER9 is MUCH cheaper. You can get an 18-core CPU for a far better price, and it has 72 threads instead of 36 (as on Intel).
That sounds pretty high-end. Is that true for regular desktop CPUs? Ex: I built a friend a rig a year or so ago that could do everything up to the best games of the time. It cost around $600. Can I get a gaming or multimedia-class POWER9 box for $600 new?
No, certainly not. But you can look at it otherwise - the PC you assemble will be enough for you for 10-15 years, if you have enough money to pay now :)
$600 PC will not make it for that long.
“But you can look at it otherwise - the PC you assemble will be enough for you for 10-15 years, if you have enough money to pay now :)”
The local dealership called me back. They said whoever wrote the comment I showed them should put in an application to the sales department. They might have nice commissions waiting for them if they can keep up that smooth combo of truth and BS. ;)
“$600 PC will not make it for that long.”
Back to being serious, maybe and maybe not. The PC’s that work for about everything now get worse every year. What they get worse at depends on the year, though. The $600-700 rig was expected to get behind on high-end games in a few years, play lots of performance stuff acceptably for a few years more, and do basic stuff fast enough for years more than that. As an example (IIRC), both tedu and I each had a Core 2 Duo laptop for seven or more years with them performing acceptably on about everything we did. I paid $800 for that laptop barely-used on eBay. I’m using a Celeron right now since I’m doing maintenance on that one. It was a cheaper barter, it sucks in a lot of ways, and still gets by. I can’t say I’d have a steady stream of such bargains with long-term usability on POWER9. Maybe we’ll get it after a few years.
One other thing to note is that the Talos stuff is beta based on a review I read where they had issues with some stuff. Maybe the hardware could have similar issues that would require a replacement. That’s before considering hackers focusing on hardware now: I’m just talking vanilla problems. Until their combined HW/SW offering matures, I can’t be sure anything they sell me will last a year much less 10-15.
Even though I’d swap my KGPE-D16 for Talos any minute, I simply can’t afford it. So I’m stuck with x86, but it’s not because of proprietary software.
Don’t forget that performance enhancements, security enhancements, and increased hardware support all add to the size over what was done long ago with some UNIX or Linux. There’s cruft, and there are necessary additions that appeared over time. I’m actually curious what a minimalist OS would look like if it had all the necessary or useful stuff. I’m especially curious whether it would still fit on a floppy.
If not security or UNIX, my baseline for projects like this is MenuetOS. The UNIX alternative should try to match up in features, performance, and size.
Can you fit it with a desktop experience on a floppy like MenuetOS or QNX Demo Disc? If not, it’s not as minimal as we’re talking about. I am curious how minimal OpenBSD could get while still usable for various things, though.
Modern PC OS needs ACPI script interpreter, so it can’t be particularly small or simple. ACPI is a monstrosity.
Re: enhancements, I’m thinking Nanix would be more single-purpose, like muLinux, as a desktop OS that rarely (or never) runs untrusted code (incl. JS) and supports only hardware that would be useful for that purpose, just what’s needed for a CLI.
Given that Linux 2.0.36 (as used in muLinux), a very functional UNIX-like kernel, fit with plenty of room to spare on a floppy, I think it would be feasible to write a kernel with no focus on backwards hardware or software compatibility to take up the same amount of space.
Your OS or native apps won’t load files that were on the Internet or hackable systems at some point? Or purely personal use with only outgoing data? Otherwise, it could be hit with some attacks. Many come through things like documents, media files, etc. I can imagine scenarios where that isn’t a concern. What’s your use cases?
To be honest, my use cases are summed up in the following sentence:
it might be a nice learning exercise to get a minimal UNIX-like kernel going and a sliver of a userspace
But you’re right, there could be attacks. I just don’t see something like Nanix being in a place where security is of utmost importance, just a toy hobbyist OS.
It seems to work, just booted the ISO (admittedly not the floppy, don’t have what is needed to make a virtual image right now) of muLinux in Hyper-V and it seems to work fine, even having 0% CPU usage on idle according to Hyper-V.
Whoa, AWS will reboot your VM just because they’re doing maintenance on the host? What year is it?
Learning modern C++, with move-only semantics and rvalue references and so on, let me understand the problem Rust is trying to solve.
This is a bit disappointing. It feels a bit like we are walking into the situation OpenGL was built to avoid.
To be honest we are already in that situation.
You can’t really use GL on a Mac; it’s been stuck at the D3D10 feature level for years and runs 2-3x slower than the same code under Linux on the same hardware.
It always seemed like a weird decision from Apple to have terrible GL support, like if I was going to write a second render backend I’d probably pick DX over Metal.
I remain convinced that nobody really uses a Mac on macOS for anything serious.
And why pick DX over Metal when you can pick Vulkan over Metal?
Virtually no gaming or VR is done on a mac. I assume the only devs to use Metal would be making video editors.
This is a bit pedantic, but I play a lot of games on mac (mainly indie stuff built in Unity, since the “porting” is relatively easy), and several coworkers are also mac-only (or mac + console).
Granted, none of us are very interested in the AAA stuff, except a couple of games. But there’s definitely a (granted, small) market for this stuff. Luckily stuff like Unity means that even if the game only sells like 1k copies it’ll still be a good amount of money for “provide one extra binary from the engine exporter.”
The biggest issue is that Mac hardware isn’t shipping with anything powerful enough to run most games properly, even when you’re willing to spend a huge amount of money. So games like Hitman got ported, but you can only run them on the most expensive MBPs or iMac Pros. Meanwhile you have sub-$1k Windows laptops which can run the game (albeit not super well)
I think Vulkan might not have been ready when Metal was first sketched out – and Apple does not usually like to compromise on technology ;)
My recollection is that Metal appeared first (about June 2014), Mantle shipped shortly after (by a couple of months?), DX12 shows up mid-2015, and then Vulkan shows up in February 2016.
I get a vague impression that Mantle never made tremendous headway (because who wants to rewrite their renderer for a super fast graphics API that only works on the less popular GPU?) and DX12 seems to have made surprisingly little (because targeting an API that doesn’t work on Win7 probably doesn’t seem like a great investment right now, I guess? Current Steam survey shows Win10 at ~56% and Win7+8 at about 40% market share among people playing videogames.)
I’m disappointed that companies who own significant copyright in Linux (like RedHat or Intel) and industry groups like the BSA don’t go after intellectual property thieves like Tesla. There are plenty of non-Linux choices if companies don’t want to comply with the GPL’s license terms. Other car companies seem to be happy with VxWorks and similar.
What’s the point of asking China to comply with American IP if the US won’t even police its own companies?
I’m pretty unsurprised that a company like Intel or Red Hat wouldn’t sue. Lawsuits are expensive, and it’s not clear a GPL suit would produce any significant damages (can they show they’ve been damaged in any material way?), just injunctive relief to release the source code to users. So it’d be a pure community-oriented gesture, probably a net loss in monetary terms. And could end up a bigger loss, because with the modern IP regime as de-facto a kind of armed standoff where everyone accumulates defensive portfolios, suing someone is basically firing a first shot that invites them to dig through their own IP to see if they have anything they can countersue you over. So you only do that if you feel you can gain something significant.
SFC is in a pretty different position, as a nonprofit explicitly dedicated to free software. So these kinds of lawsuits advance their mission, and since they aren’t a tech company themselves, there’s not much you can counter-sue them over. Seems like a better fit for GPL enforcement really.
a GPL suit would produce any significant damages (can they show they’ve been damaged in any material way?)
This is generally why the FSF’s original purpose in enforcing the GPL was always to ensure that the code got published, not to try to shake anyone down for money. rms told Eben in the beginning: make sure you make compliance the ultimate goal, not monetary damages. The FSF and the Conservancy both follow these principles. Other copyleft holders might not.
Intel owned VxWorks until very recently. Tesla’s copyright violations competed directly with their business.
I’m not a lawyer but the GPL includes the term (emphasis added)
- You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
Even if monetary damages are not available (not sure if they are), it should be possible to get injunctive relief revoking the right to use the software at all, not just injunctive relief requiring them to release the source.
This is from GPLv2.
GPLv3 is a bit more lenient:
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Now, I think people should move to GPLv3 if they want this termination clause.
And in any case, five years of non-compliance is completely disrespectful of the various developers who contributed to Tesla through their contributions to the free software it adopted.
To that end, we ask that everyone join us and our coalition in extending Tesla’s time to reach full GPL compliance for Linux and BusyBox, not just for the 30 days provided by following GPLv3’s termination provisions, but for at least another six months.
As a developer, this sounds a lot like changing the license text for the benefit of big corporations without the contributors’ agreement.
When I read this kind of news I feel betrayed by the FSF.
I seriously wonder if we need a stronger, more seriously enforced copyleft.
It is not without contributor agreement. Any contributor who does not agree is free to engage in their own compliance or enforcement activity. Conservancy can only take action on behalf of contributors who have explicitly asked them to.
The biggest problem is that most contributors do not participate in compliance or enforcement activities at all.
Conservancy can only take action on behalf of contributors who have explicitly asked them to.
Trust me, it’s not that simple.
The biggest problem is that most contributors do not participate in compliance or enforcement activities at all.
Maybe contributors already agreed to contribute under the license terms and just want it to be enforced as is?
I’m sincerely puzzled by the Software Freedom Conservancy.
Philosophically I like this gentle touch, and I’d like to believe that companies will be inspired by their work.
But in practice, to my untrained eye, they weaken the GPL, because the message to companies is that Conservancy is afraid to test the GPL in court to defend the developers’ will as expressed in the license, as if the license itself were not that solid.
I’m not a lawyer, but as a developer, this scares me a bit.
If contributors want their license enforced, they have to do something about it themselves. No one can legally enforce it for them (unless they enter an explicit agreement). There is no magical enforcement body, only us.
Conservancy’s particular strategy wouldn’t be the only one in use if anyone else did enforcement work ;)
They’re asking China to comply with the kind of American IP that earns high margins, not with FOSS licenses. And they’re doing it because American companies pay politicians to act in those companies’ interests, too.
I’m really amazed that Pycon US managed to do this. It’s rare to get a few French talks in Pycon Canada. Is there really that much more Spanish in the US that Pycon could get a whole track in Spanish?
It’s always nice to have Theo to remind us that Linus isn’t as bad of an asshole as people like to portray.
Things I self-host now on the Interwebs (as opposed to at home):
Things I’m setting up on the Interwebs:
Over time I may move the Docker and KVM-based Linux boxes over to OpenBSD and VMM as it matures. I’m moving internal systems from Debian to OpenBSD or NetBSD because I’ve had enough of systemd.
Out of curiosity, why migrate your entire OS to avoid systemd rather than just switching init systems? Debian supports others just fine. I use OpenRC with no issues, and personally find that solution much more comfortable than learning an entirely new management interface.
To be fair, it’s not just systemd, but systemd was the beginning of the end for me.
I expect my servers to be stable and mostly static. I expect to understand what’s running on them, and to manage them accordingly. Over the years, Debian has continued to change, choosing things I just don’t support (systemd, removing ifconfig, etc.). I’ve moved most of my stack over to Docker, which has made deployment easier at the cost of me not being certain what code I’m running at any point in time. So in effect I’m not even really running Debian as such (my Docker images are a mix of Alpine and Ubuntu images anyway).
I used to use NetBSD years back quite heavily, so moving back to it is fairly straightforward, and I like OpenBSD’s approach to code reduction and simplicity over feature chasing. I think it was always on the cards but the removal of ifconfig and the recent furore over the abort() function with RMS gave me the shove I needed to start moving.
For now I’m backing up my configs in git, data via rsync/ssh and will probably manage deployment via Ansible.
It’s not as easy as docker-compose, but not as scary as pulling images from public repos. Plus, I’ll actually know what code I’m running at a given point in time.
Have you looked at Capistrano for deployment? Its workflow for deployment and rollback centers around releasing a branch of a git repo.
I’m interested in what you think of the two strategies and why you’d use one or the other for your setup, if you have an opinion.
I don’t run ruby, given the choice. It’s not a dogmatic thing, it’s just that I’ve found that there are more important things for me to get round to than learning ruby properly, and that if I’m not prepared to learn it properly I’m not giving it a fair shout.
N.B. You can partially remove systemd, but not completely. Many binaries depend on libsystemd at runtime even when they don’t look like they would need it.
When I ran my own init system on Arch (systemd was giving me woes) I had to keep libsystemd.so installed for even simple tools like pgrep to work.
Some more info and discussion here. I didn’t want to switch away from Arch, but I also didn’t want remnants of systemd sticking around. Given the culture of systemd adding new features and acting like a sysadmin on my computer I thought it wise to try and keep my distance.
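If you want to check this on your own box, here’s a minimal sketch (the helper name is mine) that shells out to ldd to see whether a binary dynamically links libsystemd; it’s Linux-only and returns False whenever the check can’t be made:

```python
import shutil
import subprocess


def links_libsystemd(binary: str) -> bool:
    """Best-effort check: does `binary` dynamically link libsystemd?

    Uses ldd, so this only works on Linux and only for dynamically
    linked executables; returns False when the check can't be made.
    """
    path = shutil.which(binary)
    if path is None:
        return False
    try:
        out = subprocess.run(["ldd", path], capture_output=True, text=True)
    except FileNotFoundError:  # no ldd on this system
        return False
    return "libsystemd" in out.stdout


# On distros that build procps-ng against systemd, links_libsystemd("pgrep")
# comes back True; on others (or when ldd is unavailable) it is False.
```

Running it across /usr/bin gives a quick picture of how far the dependency has spread on a given distro.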
The author of the article regarding pgrep you linked used an ancient, outdated kernel, and complained that the newest versions of software wouldn’t work.
He/She used all debug flags for the kernel, and complained about the verbosity.
He/She used a custom, unsupported build of a bootloader, and complained about the interface.
He/She installed a custom kernel package, and was surprised that it (requiring a different partition layout) wiped his/her partitions.
He/She complains about color profiles, and says he/she “does not use color profiles” – which is hilarious, considering he/she definitely does use them, just unknowingly, and likely with the default sRGB set (which is horribly inaccurate anyway).
He/She asks why pgrep has a systemd dependency – pgrep and ps both support displaying the systemd unit owning a process.
I’m the author of the article.
ancient, outdated kernel
all debug flags for the kernel
unsupported build of a bootloader
The kernel, kernel build options and bootloader were set by Arch Linux ARM project. They were not unsupported or unusual, they were what the team provided in their install instructions and their repos.
A newer mainstream kernel build did appear in the repos at some point, but it had several features broken (suspend/resume, etc). The only valid option for day to day use was the recommended old kernel.
complained that the newest versions of software wouldn’t work
I’m perfectly happy for software to break due to out of date dependencies. But an init system is a special case, because if it fails then the operating system becomes inoperable.
Core software should fail gracefully. A good piece of software behaves well in both normal and adverse conditions.
I was greatly surprised that systemd did not provide some form of rescue getty or anything else upon failure. It left me in a position that was very difficult to solve.
He/She installed a custom kernel package, and was surprised that it (requiring a different partition layout) wiped his/her partitions
This was not a custom kernel package, it was provided by the Arch Linux ARM team. It was a newer kernel package that described itself as supporting my model. As it turns out it was the new recommended/mandated kernel package in the Arch Linux ARM install instructions for my laptop.
Even if the kernel were custom, it is highly unusual for distribution packages to contain scripts that overwrite partitions.
He/She complains about color profiles, and says he/she “does not use color profiles” – which is hilarious, considering he/she definitely does use them, just unknowingly
It looks like you have merged together several distinct concepts under the words “colour profiles”.
Colour profiles are indeed used by image and video codecs every day on our computers. Most of these formats do not store their data in the same format as our monitors expect (RGB888 gamma ~2.2, ie common sRGB) so they have to perform colour space conversions.
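To make the conversion concrete, here’s a small sketch of the standard sRGB transfer function (the piecewise curve from IEC 61966-2-1, whose overall response is close to a pure gamma of ~2.2):

```python
def srgb_to_linear(c: float) -> float:
    """Decode one sRGB channel value in [0, 1] to linear light."""
    # Linear segment near black, power-2.4 segment elsewhere.
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4


def linear_to_srgb(c: float) -> float:
    """Encode linear light back to an sRGB channel value in [0, 1]."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055
```

Codecs do this (and the matrix transforms between colour spaces) regardless of whether a system-wide colour-management daemon is running.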
Whatever the systemd unit was providing in the form of ‘colour profiles’ was completely unnecessary for this process. All my applications worked before systemd did this. And they still do now without systemd doing it.
likely with the default sRGB set (which is horribly inaccurate anyway)
1:1 sRGB is good enough for most people, as it’s only possible to obtain benefits from colour profiles in very specific scenarios.
If you are using a new desktop monitor and you have a specific task you need or want to match for, then yes.
If you are using a laptop screen like I was: most change their colour curves dramatically when you change the screen viewing angle. Tweaking of colour profiles provides next to no benefit. Some laptop models have much nicer screens and avoid this, but at the cost of battery life (higher light emissions) and generally higher cost.
I use second-hand monitors for my desktop. They mostly do not have factory-provided colour profiles, and even then the (CCFL) backlights have aged and changed their responses. Without calibrated colour-profiling equipment there is not much I can do, and it is not worth the effort unless I have a very specific reason to do so.
He/She asks why pgrep has a systemd dependency – pgrep and ps both support displaying the systemd unit owning a process.
You can do this without making systemd libraries a hard runtime dependency.
I raised this issue because of a concept that seemed more pertinent to me: the extension of systemd’s influence. I don’t think it’s appropriate for basic tools to depend on any optional programs or libraries, whether they be an init system like systemd, a runtime like mono or a framework like docker.
Almost all of these issues are distro issues.
Systemd can work without the color profile daemon, and ps and pgrep can work without systemd. Same with the kernel.
But the policy of Arch is to always build all packages with all possible dependencies as hard dependencies.
e.g. for Quassel, which can make use of KDE integration but doesn’t require it, they decided to build it so that it has a hard dependency on KDE (which means it pulls in 400M of packages for a package that would be fine without any of them).
I really wish the FreeBSD port of Docker was still maintained. It’s a few years behind at this point, but if FreeBSD was supported as a first class Docker operating system, I think we’d see a lot more people running it.
IME Docker abstracts the problem under a layer of magic rather than providing a sustainable solution.
Yes, it makes deploying otherwise troublesome software as easy as adding a line referencing a random GitHub repo. I’m not convinced this is a good thing.
As someone who needs to know exactly what gets deployed in production, and therefore cannot use any public registry, I can say with certainty that Docker is a lot less cool without the plethora of automagic images you can run.
Exactly, once you start running private registries it’s not the timesaver it may have first appeared as.
Personally, I’ll have to disagree with that. I let GitLab automatically build the base containers I need, plus my own, and the result is great: scaling, development, reproducibility, etc. all become much easier.
I think Kubernetes has support for some alternative runtimes, including FreeBSD jails? That might make FreeBSD more popular in the long run.
Works fine for me(tm).
It seems fine both over mobile and laptop, and over 4G. I haven’t tried any large groups and I doubt I’ll use it much, but so far I’ve been impressed.
Is bookstack good? I’m on the never ending search for a good wiki system. I keep half writing my own and (thankfully) failing to complete it.
Cowyo is pretty straightforward (if sort of sparse).
Being go and working with flat files, it’s pretty straightforward to run & backup.
Bookstack is one of the best wikis I’ve given to non-technical people to use. However, I think it stores HTML internally, which is a bit icky in my view; I’d prefer it if they converted it to Markdown. Still, it’s fairly low resource, pretty, and works very, very well.
I stopped hosting my own email when I realized that I wasn’t reading my personal email because of the spam. And yeah I tried greylisting and spamassassin and all kinds of shit. At that time I was running my own DNS too (primary & secondary on different continents).
These days I’m only really self-hosting web stuff though I’m pretty sure that’s a bad idea. Nobody offers the web hosting flexibility I want at the price I want to pay, though I think letsencrypt’s ubiquity may start to change that.
The points are good, but I certainly don’t want inotify features gating the VFS layer.
IMO inotify is good at what it does. If you want to know about absolutely everything going on for a given filesystem, maybe you want to implement the filesystem itself (via fuse, e.g.). IIRC (and I was involved in higher-level filesystem libraries when this stuff was going into the kernel – but that was a long time ago) dnotify and inotify were designed with the constraint that they couldn’t impose a significant performance penalty, the logic being that the fs operations were more important than the change notification. If watching changes is as important or more important than I/O performance, another mechanism like a fuse proxy fs or strace/ptrace makes sense.
FUSE is how tup keeps track of dependencies, although I think it will also attempt to use library injection when that’s not available.
Thing is, FUSE is slower, buggy (I’ve had kernel panics) and less flexible. A native way to track filesystem operations in a lossless manner would be really nice to have on Linux.