I do not trust Software Freedom Conservancy (and with good reason), but I agree with most of what is written here, except:
Copyright and other legal systems give authors the power to decide what license to choose […]
In my view, it’s a power which you don’t deserve — that allows you to restrict others.
As an author of free software myself, I think that I totally deserve the right to decide who can use my work and how.
I read the article you linked to but didn’t really understand how that means SFC can’t be trusted. Because a project under their umbrella git rebased a repo?
No.
I cannot trust them anymore because, when the project joined Conservancy, I explicitly asked them how my copyright was going to be handled, and Karen Sandler replied that it was not going to change.
One year later I discovered that my name had been completely removed from the sources.
According to the GPLv2 this violation causes a definitive termination of the project’s rights to use or modify the software.
Now, I informed Sandler about that mess (before the rebase) and never heard back from her, despite several of my contributions getting “accidentally squashed” during the rebase.
That’s why I cannot trust them anymore.
Because they are still supporting a project that purposely violated the GPLv2 (causing its definitive termination), and despite the fact that I gave them the chance to fix this, they didn’t… and they tried to remove all evidence of the violation and of the license termination with a rebase… (which still squashed some of my commits).
He’s objecting to restricting others in the way that proprietary software does; that’s the right he says you shouldn’t have. I think your quote edited out the part where bkuhn explained what he was talking about.
But more to your point, I also think that your right to decide how others can use your work should be very limited. With software, an unlimited number of people can benefit from using your work in ways you may disagree with, while you would be the only one who would object. As a bargain with society, your authorial rights should be given smaller weight than the rights of your users.
As a bargain with society, your authorial rights should be given smaller weight than the rights of your users.
Is this a principle that you believe should be only applied to software?
Because if not, one could argue that a person’s special skills (say, as a doctor) are so valuable to society that that person should work for free to ensure that the greatest number of people have access to their skill.
If the principle is restricted to expression, a photograph I take of a person could be freely used by a political party that I despise to further their cause through propaganda. I am only one person, and they are many. My pretty picture can help them more than it helps me. So according to the principle above (as I read it) they should have unrestricted access to my work.
I believe that the current regime of IP legislation is weighted too much towards copyright holders, but to argue that a creator should have no rights to decide how their work is used is going too far.
Software is different from doctors because software can be reproduced indefinitely without inconveniencing the author. Photographs are more similar to software than to doctors.
I also didn’t say an author should have no rights. I just said their rights should weigh less. For example, copyrights should expire after, say, 10 years, instead of lasting forever as they de facto do now.
Thanks for clarifying your position in this matter.
I think we are broadly in agreement, especially with regards to the pernicious effects of “infinite copyright”.
It’s funny that I’m the one taking copyright’s side here…
Let’s put it this way: if I invented a clean energy source I would do my best to ensure it was not turned to a weapon.
Same with software.
It’s my work, thus my responsibility.
An interesting read, but I do not think it looks at the issues of the Web with a developer’s hat on.
While some of these issues are serious geopolitical threats for most nations around the world (810 out of 930 DNS root servers are under US control), others are technical issues that should be addressed at the software level.
I think it’s time for the W3C to release a new version of XHTML that addresses all this mess once and for all: let’s remove JavaScript (and WebAssembly) from the browsers, let’s add easy-to-remove tags like ADVERTISING, let’s extend form controls, let’s add more hypertext controls for video and audio that don’t need JavaScript, and so on…
Basically, the only way to fix the Web is to make it a hypertext medium again.
From the article:
Another issue is whether the customer should install the fix at all. Many computer users don’t allow outside or unprivileged users to run on their CPUs the way a cloud or hosting company does.
I guess the key there is “the way a cloud or hosting company does.” Users typically run browsers, which locally run remotely-fetched arbitrary code as a feature. I would argue that because of browsers, users should especially install the fixes.
The only time when a fix may not be applicable is on single-tenant configurations and when remotely-fetched arbitrary code isn’t run locally.
Users typically run browsers, which locally run remotely-fetched arbitrary code as a feature.
I was going to point this out too, but you beat me to it.
However, this opens an entirely different vulnerability set, a Pandora’s box that no one dares to face.
Well, what if a researcher does all these things anyway?
When they publish the results, their licence ends. So what?
Also, no state could allow the installation of such microcode on its hardware, precisely because of this clause.
This license, whether on purpose or by accident (see my other comment in this thread for elaboration), is granted to and focuses on OEMs:
- PURPOSE. You seek to obtain, and Intel desires to provide You, under the terms of this Agreement, Software solely for Your efforts to develop and distribute products integrating Intel hardware and Intel software. […]
If you are a systems integrator, there is more than this license agreement binding you and Intel together. If you are not a systems integrator, this license isn’t about you, making the bolded assertion in the article false by being too broad:
Intel has now attempted to gag anyone who would collect information for reporting about those penalties, through a restriction in their license.
Intel made either a mistake or policy change related to their systems integrators. We will all get our benchmarks.
Publishers know what they are doing. Nobody cares about people wanting to block Javascript.
You can disable JavaScript of course, and at this point the web is still usable without it. However, publishers will increasingly turn to protections against JS blockers; you can thank the increasing aggressiveness and popularity of ad-blocking extensions for that.
You can disable JavaScript but not with a usable UI, so practically most people cannot.
Also, JavaScript should be disabled by default, enabled on a per-site basis, and marked as “Not Secure” anyway.
Browsers should make SRI mandatory for JavaScript, and they should pay special attention to suspicious HTTP headers used with scripts.
Interestingly, sites like eBay and Amazon do work fine without JavaScript. Not quite as comfortable, but no quirks there either. eBay has gotten worse over the years, I admit…
There is a fairly good compromise. I use uMatrix, which blocks 3rd-party scripts by default and gives you a UI to enable them as needed. Quite often it doesn’t break anything, and when it does it’s usually super easy to work out that a script from a CDN is important but a script from Twitter or Google Analytics is not.
Apparently these days several people are (re)discovering the issues of the Web:
Not to mention centralization, Cambridge Analytica and other more general geopolitical issues about the Internet.
The problem, however, seems to be that people either do not understand these issues or benefit from them.
I mean, except users.
Apparently the programmer’s fingerprints are somewhat preserved in binary form.
However I’m not sure about its usefulness in court: for example, after Harvey claimed to have removed most of my commits with git rebase (to prevent the termination of the GPLv2 after the removal of my copyright statements by a Google employee), I found several of my contributions squashed into other commits, and they said they had redone the changes without looking at the code and in the exact same way.
To be fair, they should also mark as “Not Secure” any page running JavaScript.
Also, pointless HTTPS adoption might reduce content accessibility without blocking censorship.
(Disclaimer: this does not mean that you shouldn’t adopt HTTPS for sensitive content! It just means that using HTTPS should not be a matter of fashion: there are serious trade-offs to consider)
By adopting HTTPS you basically ensure that nasty ISPs and CDNs can’t insert garbage into your webpages.
[Comment removed by author]
Technically, you authorize them (you sign actual paperwork) to get/generate a certificate on your behalf (at least this is my experience with Akamai). You don’t upload your own ssl private key to them.
Because it’s part of The Process. (Technical Dark Patterns, Opt-In without a clear way to Opt-Out, etc.)
Because you’ll be laughed at if you don’t. (Social expectations, “received wisdom”, etc.)
Because Do It Now. Do It Now. Do It Now. (Nagging emails. Nagging pings on social media. Nagging.)
Lastly, of course, are Terms Of Service, different from the above by at least being above-board.
No.
It protects against cheap man-in-the-middle attacks (such as the one an ISP could do), but it can do nothing against CDNs that can identify you, as CDNs serve you JavaScript over HTTPS.
With Subresource Integrity (SRI) page authors can protect against CDNed resources changing out from beneath them.
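An SRI integrity value is just a base64-encoded cryptographic hash of the resource, so the page author pins exactly the bytes they audited. A minimal sketch of computing one (the script content and CDN URL below are made up for illustration):

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Return an integrity value for an SRI attribute, e.g. 'sha384-...'."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

# Hash the script exactly as the CDN serves it (byte-for-byte).
script = b'console.log("hello");'
print(sri_hash(script))

# The value then goes on the tag that loads the resource:
# <script src="https://cdn.example.com/lib.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```

If the CDN later serves different bytes, the hash no longer matches and the browser refuses to run the script.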
Yes, SRI mitigates some of the JavaScript attacks I describe in the article, in particular the nasty ones from CDNs exploiting your trust in a harmless-looking website.
Unfortunately several others remain possible (just think of JSONP, or even simpler cases where the website itself colludes in the attack). Also, it needs widespread adoption to become a security feature: it should probably be mandatory, but for sure browsers should mark as “Not Secure” any page downloading programs from CDNs without it.
Where SRI could really help is with the accessibility issues described by Meyer: you can serve most page resources as cacheable HTTP resources if the content hash is declared in an HTTPS page!
With SRI you can prevent the CDNs you use to load external JS scripts from manipulating the webpage.
I also don’t buy the link that claims it reduces content accessibility; the link you provided above explains a problem that would be solved by simply using an HTTPS caching proxy (something a lot of corporate networks seem to have no problem operating, considering TLS 1.3 explicitly tries not to break those middleboxes).
As much as I respect Meyer, his point is moot. MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. Some companies have even made out-of-the-box HTTPS URL filtering their selling point. If people are ready or forced to trade security for accessibility, but don’t know how to set up an HTTPS MitM proxy, it’s their problem, not the webmasters’. We should be ready to teach those in need how to set it up, of course, but that’s about it.
MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. […] If people are ready or forced to trade security for accessibility, but don’t know how to set up an HTTPS MitM proxy, it’s their problem, not the webmasters’.
Well… how can I say that… I don’t think so.
Selling an HTTPS MitM proxy as a security solution is plain incompetence.
Beyond the obvious risk that the proxy is compromised (you should never assume that it won’t be), which is pretty high in some places (not only in Africa… don’t be naive, a chain is only as strong as its weakest link), a transparent HTTPS proxy has an obvious UI issue: people do not realise that it’s unsafe.
If browsers don’t mark them as “Not Secure” (how could they?), users will overlook the MitM risks, turning a security feature against the users’ real security and safety.
Is this something webmasters should care about? I think so.
Selling an HTTPS MitM proxy as a security solution is plain incompetence.
Not sure how to tell you this, but companies have been doing this on their internal networks for a very long time and this is basically standard operating procedure at every enterprise-level network I’ve seen. They create their own CA, generate an intermediate CA key cert, and then put that on an HTTPS MITM transparent proxy that inspects all traffic going in an out of the network. The intermediate cert is added to the certificate store on all devices issued to employees so that it is trusted. By inspecting all of the traffic, they can monitor for external and internal threats, scan for exfiltration of trade secrets and proprietary data, and keep employees from watching porn at work. There is an entire industry around products that do this, BlueCoat and Barracuda are two popular examples.
There is an entire industry around products that do this
There is an entire industry around ransomware. But that does not mean it’s a security solution.
It is, it’s just that word security is better understood as “who” is getting (or not) secured from “whom”.
What you keep saying is that MitM proxies do not protect the security of end users (that is, employees). What they do, however, in certain contexts like the one described above, is help protect the organisation in which those end users operate. Arguably they do, because it certainly makes it more difficult to protect yourself from something you cannot see. If employees are seen as a potential threat (they are), then reducing their security can help you (the organisation) with yours.
I wonder if you did read the articles I linked…
The point is that, in a context of unreliable connectivity, HTTPS dramatically reduces accessibility but doesn’t help against censorship.
In this context, we need to grant people both accessibility and security.
An obvious solution is to give them cacheable HTTP access to contents. We can fool the clients into trusting a MitM caching proxy, but since all we want is caching, this is not the best solution: it adds no security, just a false sense of security. Thus in that context, you can improve users’ security by removing HTTPS.
I have read it, but more importantly, I worked in and built services for places like that for about 5 years (Uganda, Bolivia, Tajikistan, rural India…).
I am with you that an HTTPS proxy is generally best avoided, if for no other reason than that it grows the attack surface. I disagree that removing HTTPS increases security. It adds a lot more places and actors who can now negatively impact the user, in exchange for the user merely knowing this without being able to do much about it.
And that is even without going into which content is safe to be cached in a given environment.
And that is even without going into which content is safe to be cached in a given environment.
Yes, this is the best objection I’ve read so far.
As always it’s a matter of tradeoffs. In a previous related thread I described how I would try to fix the issue in a way that lets people easily opt out and opt in.
But while I think it would be weird to remove HTTPS for an ecommerce cart or for a political forum, I think that most of Wikipedia should be served through both HTTP and HTTPS. People should be aware that HTTP pages are not secure (even though it all depends on your threat model…) but should not be misled into thinking that pages going through a MitM proxy are secure.
An HTTPS proxy isn’t incompetence, it’s industry standard.
They solve a number of problems and are basically standard in almost all corporate networks with a minimum security level. They aren’t a weak link in the chain, since traffic in front of the proxy is HTTPS and behind it stays in the local network, encrypted under a network-level CA (you can restrict CA capabilities via TLS cert extensions; there is a fair number of useful ones that prevent compromise).
Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to a device, at which level there is no reason to consider what the user is doing as insecure.
Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to a device, at which level there is no reason to consider what the user is doing as insecure.
Browsers bypass the network configuration to protect the users’ privacy.
(I agree this is stupid, but they are trying to push this anyway)
The point is: the user’s security is at risk whenever she sees as HTTPS (which stands for “HTTP Secure”) something that is not secure. It’s a rather simple and verifiable fact.
It’s true that posing a threat to employees’ security is an industry standard. But it’s not a security solution. At least, not for the employees.
And, doing that in a school or a public library is dangerous and plain stupid.
Nobody is posing a threat to employees’ security here; a corporation can in this case be regarded as a single entity, so terminating SSL at the borders of the entity, similar to how a browser terminates SSL by showing the website on a screen, is fairly valid.
Schools and public libraries usually have their internet filtered, yes, and that is usually made clear to the user beforehand (at least when I wanted access to either, I was in both cases instructed that the network is supervised and filtered), which IMO negates the potential security compromise.
Browsers bypass the network configuration to protect the users’ privacy.
Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.
Schools and public libraries usually have their internet filtered, yes, and that is usually made clear to the user beforehand […] which IMO negates the potential security compromise.
Yes this is true.
If people are kept constantly aware of the presence of a transparent HTTPS proxy/MitM, I have no objection to its use instead of an HTTP proxy for caching purposes. Marking all pages as “Not Secure” is a good way to gain such awareness.
Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.
Did you know about Firefox’s DoH/CloudFlare affair?
Yes I’m aware of the “affair”. To my knowledge the initial DoH experiment was localized and run on users who had enabled studies (opt-in). In both the experiment and now Mozilla has a contract with CloudFlare to protect the user privacy during queries when DoH is enabled (which to my knowledge it isn’t by default). In fact, the problem ungleich is blogging about isn’t even slated for standard release yet, to my knowledge.
It’s plain old wrong in the bad kind of way; it conflates security maximalism with Mozilla’s mission to bring privacy and security to the maximum number of users.
TBH, I don’t know what you mean with “security maximalism”.
I think ungleich raises serious concerns that should be taken into account before shipping DoH to the masses.
Mozilla has a contract with CloudFlare to protect the user privacy
It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.
AFAIK, even Facebook had a contract with its users.
Yeah.. I know… they will “do no evil”…
Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.
It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.
Cloudflare hasn’t done much that makes me believe they will violate my privacy. They’re not in the business of selling data to advertisers.
AFAIK, even Facebook had a contract with its users
Facebook used Dark Patterns to get users to willingly agree to terms they would otherwise never agree on, I don’t think this is comparable. Facebook likely never violated the contract terms with their users that way.
Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.
You should define “common user”.
If you mean the politically inept, who are happy to be easily manipulated as long as they are given something to say and retweet… yes, they have nothing to fear.
The problem is for those people who are actually useful to society.
Cloudflare hasn’t done much that makes me believe they will violate my privacy.
The problem with Cloudflare is not what they did, it’s what they could do.
There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.
But my concerns are with Mozilla.
They are trusted by millions of people worldwide. Me included. But actually, I’m starting to think they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.
So in your opinion, the average user does not deserve the protection of being able to browse the net as safe as we can make it for them?
Just because you think they aren’t useful to society (and they are, these people have all the important jobs, someone isn’t useless because they can’t use a computer) doesn’t mean we, as software engineers, should abandon them.
There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.
Then don’t use it? DoH isn’t going to be enabled by default in the near future and any UI plans for now make it opt-in and configurable. The “Cloudflare is default” is strictly for tests and users that opt into this.
they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.
You mean safe because everyone involved knows what’s happening?
I don’t believe the concerns are really concerns for the common user.
You should define “common user”.
If you mean the politically inept who are happy to be easily manipulated… So in your opinion, the average user does not deserve the protection of being able to browse the net as safe as we can make it for them?
I’m not sure if you are serious or you are pretending to not understand to cope with your lack of arguments.
Let’s assume the first… for now.
I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept. That’s obviously because, anyone politically inept is unlikely to be affected by surveillance.
That’s it.
they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.
You mean safe because everyone involved knows what’s happening?
Really?
Are you sure everyone understands what a MitM attack is?
Are you sure every employee understands that their system administrators can see the mail they read on GMail? I think you don’t have much experience with users, and I hope you don’t design user interfaces.
A MitM caching HTTPS proxy is not safe. It can be useful for corporate surveillance, but it’s not safe for users. And it extends the attack surface, both for the users and the company.
As for Mozilla: as I said, I’m just not sure whether they deserve trust or not.
I hope they do! Really! But it’s really too naive to think that a contract is enough to bind a company more than a subpoena does. And they ship WebAssembly. And you have to edit about:config to disable JavaScript…
All this is very suspect for a company that claims to care about users’ privacy!
I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept.
I’m saying the concerns raised by ungleich are too extreme and should be dismissed on grounds of being not practical in the real world.
Are you sure everyone understands what a MitM attack is?
An attack requires an adversary, the evil one. An HTTPS caching proxy isn’t the evil one or an enemy: you have to opt into this behaviour. It is not an attack, and I think it’s not fair to characterise it as such.
Are you sure every employee understands that their system administrators can see the mail they read on GMail?
Yes. When I signed my work contract this was specifically pointed out and made clear in writing. I see no problem with that.
And it extends the attack surface, both for the users and the company.
And it also enables caching for users with less than stellar bandwidth (think third world countries where satellite internet is common, 500ms ping, 80% packet loss, 1mbps… you want caching for the entire network, even with HTTPS)
And they ship WebAssembly.
And? I have no concerns about WebAssembly. It’s not worse than obfuscated JavaScript. It doesn’t enable anything that wasn’t possible before via asm.js. The post you linked is another security-maximalist opinion piece with few factual arguments.
And you have to edit about:config to disable JavaScript…
Or install a half-way competent script blocker like uMatrix.
All this is very suspect for a company that claims to care about users’ privacy!
I think it’s understandable for a company that both cares about users’ privacy and doesn’t want a market share of “only security maximalists”, also known as 0%.
An attack requires an adversary, the evil one.
According to this argument, you don’t need HTTPS until you have an enemy.
It shows very well your understanding of security.
The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.
I have no concerns about WebAssembly.
Not a surprise.
Evidently you’ve never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).
Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.
As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.
According to this argument, you don’t need HTTPS until you have an enemy.
If there is no adversary, no Mallory in the connection, there is no reason to encrypt it either, correct.
It shows very well your understanding of security.
My understanding of security is based on threat models. A threat model includes who you trust, who you want to talk to and who you don’t trust. It includes how much money you want to spend, how much your attacker can spend, and the methods available to both of you.
There is no binary security; a threat model is the entry point, and your protection mechanisms should match your threat model as closely as possible or exceed it, but there is no reason to exert effort beyond your threat model.
The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.
Mallory is a potential enemy. An HTTPS caching proxy operated by a corporation is not an enemy. It’s not Mallory; it’s Bob, Alice and Eve, where Bob wants to send Alice a message, she works for Eve, and Eve wants to avoid having duplicate messages on the network, so Eve and Alice agree that caching the encrypted connection is worthwhile.
Mallory sits between Eve and Bob, not between Bob and Alice.
Evidently you’ve never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).
I did, in which case I either filed a GitHub issue if the project was open source, or I notified the company that offered the JavaScript or optimized binary. Usually the bug is then fixed.
It’s not my duty or problem to debug web applications that I don’t develop.
Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.
Then don’t do it? Nobody is forcing you.
As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.
I don’t think you consider that a practical problem such as bad connections can outweigh a lot of potential security issues: you don’t have the time or user patience to do it properly, and in most cases it’ll be good enough for the average user.
My point is that the problems of unencrypted HTTP and MitM’ed HTTPS are exactly the same. If one used to prefer the former because it can be easily cached, I can’t see how setting up the latter makes their security issues worse.
With HTTP you know it’s not secure. OTOH you might not be aware that your HTTPS connection to the server is not secure at all.
The lack of awareness makes MitM caching worse.
I can’t decide if Let’s Encrypt is a godsend or a threat.
On one hand, it lets you support HTTPS for free.
On the other, they accumulate an enormous amount of power worldwide.
Agreed, they are quickly becoming the only game in town when it comes to TLS certs. Luckily they are a non-profit, so they have more transparency than, say, Google, who took over our email.
It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.
Is there anything preventing another (or another ten) free CAs from existing? Let’s Encrypt just showed everyone how, and their protocol isn’t a secret.
OpenCA tried for a long time, and I think it has now pretty much given up: https://www.openca.org/ and they just exist in their own little bubble now.
Basically nobody wants to certify you unless you are willing to pay through the nose and are considered friendly to the way of doing things. LE bought their way in, I’m sure, to get their cert cross-signed, which is how they managed so “quickly”, and it still took YEARS.
I’ve created lots of CAs, trusted by at most 250 people. :)
Of course it’s not easy to make a new generally-trusted CA — nor would I want it to be. It’s a big complicated expensive thing to do properly. But if you’re willing to do the work, and can arrange the funding, is anything stopping you? I don’t know that browser vendors are against the idea of multiple free CAs.
Obviously I was not talking about the technical stuff.
One of my previous bosses explored the matter. He already had the technical staff, but he wanted to become an official authority. It was more or less 2005.
After a while (and a lot of money spent on legal consulting) he gave up.
He said: “it’s easier to open a bank”.
In a sense it’s reasonable, as European law wants to protect citizens from unsafe organisations.
But, it’s definitely not a technical problem.
Luckily they are a non-profit
Linux Foundation is a 501(c)(6) organization, a business league that is not organized for profit and no part of the net earnings goes to the benefit of any private shareholder or individual.
The fact that all shareholders benefit from its work without a direct economic gain doesn’t mean it has the public good at heart. Even less the public good of the whole world.
It sounds a lot like another attempt to centralize the Internet, always around the same center.
It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.
And such certificates protect people from a lot of relatively cheap attacks. That’s why I’m in doubt.
Probably, issuing TLS certificates should be a public service free for each citizen of a state.
Oh Jeez. Thanks, I didn’t realize it was not a 501c3, When LE was first coming around they talked about being a non-profit and I just assumed. That’s what happens when I assume.
Proof, so we aren’t just taking @Shamar’s word for it:
Linux Foundation Bylaws: https://www.linuxfoundation.org/bylaws/
Section 2.1 states the 501(c)(6) designation with the IRS.
My point stands, that we do get more transparency this way than we would if they were a private for-profit company, but I agree it’s definitely not ideal.
So you think local cities, counties, states and countries should get in the TLS cert business? That would be interesting.
It’s true the Linux Foundation isn’t a 501(c)(3) but the Linux Foundation doesn’t control Let’s Encrypt, the Internet Security Research Group does. And the ISRG is a 501(c)(3).
So your initial post is correct and Shamar is mistaken.
The Linux Foundation will provide general and administrative support services, as well as services related to fundraising, financial management, contract and vendor management, and human resources.
This is from the page linked by @philpennock.
I wonder what is left to do for the Let’s Encrypt staff! :-)
I’m amused by how easily people forget that organisations are composed of people.
What if Linux Foundation decides to drop its support?
No funds. No finance. No contracts. No human resources.
Oh and no hosting, too.
But hey! I’m mistaken! ;-)
Unless you have inside information on the contract, saying LE depends on the Linux Foundation is pure speculation.
I can speculate too. Should the Linux Foundation withdraw support there are plenty of companies and organisations that have a vested interest in keeping LetsEncrypt afloat. They’ll be fine.
Agreed.
Feel free to think that it’s a philanthropic endeavour!
I will continue to think it’s a political one.
The point (which, as I said, I cannot answer yet) is whether the global risk of a single US organisation being able to break most HTTPS traffic worldwide is worth the benefit of free certificates.
Any trusted CA can MITM, though, not just the one that issued the certificate. So the problem is (and always has been) much, much worse than that.
Good point! I stand corrected. :-)
Still, note that it’s easier for the certificate issuer to go unnoticed.
What’s Linux Foundation got to do with it? Let’s Encrypt is run by ISRG, Internet Security Research Group, an organization from the IAB/IETF family if memory serves.
They’re a 501(c)(3).
LF provide hosting and support services, yes. Much as I pay AWS to run some things for me, which doesn’t lead to Amazon being in charge. https://letsencrypt.org/2015/04/09/isrg-lf-collaboration.html explains the connection.
Look at the home page, top-right.
The Linux Foundation provides hosting, fundraising and other services. LetsEncrypt collaborates with them but is run by the ISRG:
Let’s Encrypt is a free, automated, and open certificate authority brought to you by the non-profit Internet Security Research Group (ISRG).
That is a very reductionist view of what people use the web for. And I am saying this as someone whose personal site pretty much matches everything prescribed except comments (which I still have).
Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.
Chickenshit minimalism: https://medium.com/@mceglowski/chickenshit-minimalism-846fc1412524
I wouldn’t say Medium even gives the illusion of simplicity (for example, on the page you linked, try counting the visual elements that aren’t the blog post). Medium seems to take a rather contrary approach to blogs, including all the random cruft you never even imagined existed, while leaving out simple essentials like RSS feeds. I honestly have no idea how the author of the article came to suggest Medium as an example of minimalism.
I agree with your overall point, but Medium does provide RSS feeds. They are linked in the <head> and always have the same URL structure. Any medium.com/@user has an RSS feed at medium.com/feed/@user. For Medium blogs hosted at custom URLs, the feed is available at /feed.
I’m not affiliated with Medium. I have a lot of experience bugging webmasters of minimal websites to add feeds: https://github.com/issues?q=is:issue+author:tfausak+feed.
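For what it’s worth, the URL pattern described above is simple enough to capture in a few lines. This is just a sketch; `mediumFeedUrl` is a made-up helper name, not anything Medium provides:

```javascript
// Derive a Medium RSS feed URL from a profile or blog URL, following
// the pattern described above. mediumFeedUrl is a hypothetical helper.
function mediumFeedUrl(profileUrl) {
  const u = new URL(profileUrl);
  // medium.com/@user -> medium.com/feed/@user
  if (u.hostname === "medium.com" && u.pathname.startsWith("/@")) {
    return `https://medium.com/feed${u.pathname}`;
  }
  // Medium blogs on custom domains expose their feed at /feed.
  return `${u.origin}/feed`;
}
```

So `mediumFeedUrl("https://medium.com/@user")` gives `"https://medium.com/feed/@user"`.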
That is a very reductionist view of what people use the web for.
I wonder what Youtube, Google docs, Slack, and stuff would be in a minimal web.
YouTube, while not as good as it could be, is pretty minimalist if you disable all the advertising.
I find google apps to be amazingly minimal, especially compared to Microsoft Office and LibreOffice.
Minimalist Slack has been around for decades, it’s called IRC.
It is still super slow then! At some point I was able to disable JS, install the Firefox “html5-video-everywhere” extension and watch videos that way. That was awesomely fast and minimal. I tried it again a few days ago, but it didn’t seem to work anymore.
Edit: now I just “youtube-dl -f43 ” directly without going to YouTube and start watching immediately with VLC.
The youtube interface might look minimalist, but under the hood, it is everything but. Besides, I shouldn’t have to go to great lengths to disable all the useless stuff on it. It shouldn’t be the consumer’s job to strip away all the crap.
In a minimal web, locally-running applications in browser sandboxes would be locally-running applications in non-browser sandboxes. There’s no particular reason any of these applications is in a browser at all, other than myopia.
Distribution is dead-easy for websites. In theory, you could have non-browser-sandboxed apps with equally easy distribution, but then what’s the point?
Non-web-based locally-running client applications are also usually made downloadable via HTTP these days.
The point is that when an application is made with the appropriate tools for the job it’s doing, there’s less of a cognitive load on developers and less of a resource load on users. When you use a UI toolkit instead of creating a self-modifying rich text document, you have a lighter-weight, more reliable, more maintainable application.
The power of “here’s a URL, you now have an app running without going through installation or whatnot” cannot be overstated. I can give someone a copy of pseudo-Excel to edit a document we’re working together on, all through the magic of Google Sheets’ share links. Instantly.
Granted, this is less of an advantage if you’re using something all the time, but without the web it would be harder to allow for multiple tools to co-exist in the same space. And am I supposed to have people download the Doodle application just to figure out when our group of 15 can go bowling?
They are, in fact, downloading an application and running it locally.
That application can still be javascript; I just don’t see the point in making it perform DOM manipulation.
As one who knows JavaScript pretty well, I don’t see the point of writing it in JavaScript, however.
A lot of newer devs have a (probably unfounded) fear of picking up a new language, and a lot of those devs have only been trained in a handful (including JS). Even if moving away from JS isn’t actually a big deal, JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language – you can do whatever you do in JS in python or lua or perl or ruby and it’ll come out looking almost the same unless you go out of your way to use particular facilities.
The thing that makes JS code look weird is all the markup manipulation, which looks strange in any language.
JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language
(a == b) !== (a === b)
but only some times…
Javascript has gotchas, just like any other organically grown scripting language. It’s less consistent than python and lua but probably has fewer of these than perl or php.
(And, just take a look at c++ if you want a faceful of gotchas & inconsistencies!)
Not to say that, from a language design perspective, we shouldn’t prize consistency. Just to say that javascript is well within the normal range of goofiness for popular languages, and probably above average if you weigh by popularity and include C, C++, FORTRAN, and COBOL (all of which see a lot of underreported development).
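To make the `==` vs `===` gotcha quoted above concrete, here are a few cases where loose and strict equality disagree (standard JavaScript semantics, nothing assumed beyond the language itself):

```javascript
// Loose equality (==) coerces operands; strict equality (===) does not.
// These pairs answer differently — the "but only some times" part.
console.log("" == 0);            // true  (both coerce to the number 0)
console.log("" === 0);           // false (string vs number)

console.log(null == undefined);  // true  (special-cased by the spec)
console.log(null === undefined); // false (different types)

console.log("1" == 1);           // true  (string coerced to number)
console.log("1" === 1);          // false

// For same-typed operands the two agree, so the trap only springs
// when types differ.
console.log(1 == 1 && 1 === 1);  // true
```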
Web applications are expected to load progressively. And because they are sandboxed, they are allowed to start instantly without asking you for permissions.
The same could be true of sandboxed desktop applications that you could stream from a website straight into some sort of sandboxed local VM that isn’t the web. Click a link, and the application immediately starts running on your desktop.
I can’t argue with using the right tool for the job. People use Electron because there isn’t a flexible, good-looking, easy-to-use cross-platform UI kit. Damn the 500 MB of RAM usage for a chat app.
There are several good-looking flexible easy to use cross-platform UI kits. GTK, WX, and QT come to mind.
If you remove the ‘good-looking’ constraint, then you also get TK, which is substantially easier to use for certain problem sets, substantially smaller, and substantially more cross-platform (in that it will run on fringe or legacy platforms that are no longer or were never supported by GTK or QT).
All of these have well-maintained bindings to all popular scripting languages.
QT apps can look reasonably good. I think webapps can look better, but I haven’t done extensive QT customization.
The bigger issue is 1) hiring - easier to get JS devs than QT devs 2) there’s little financial incentive to reduce memory usage. Using other people’s RAM is “free” for a company, so they do it. If their customers are in US/EU/Japan, they can expect reasonably new machines so they don’t see it as an issue. They aren’t chasing the market in Nigeria, however large in population.
Webapps are sort of the equivalent of doing something in QT but using nothing but the canvas widget (except a little more awkward because you also don’t have pixel positioning). Whatever can be done in a webapp can be done in a UI toolkit, but the most extreme experimental stuff involves not using actual widgets (just like doing it as a webapp would).
Using QT doesn’t prevent you from writing in javascript. Just use NPM QT bindings. It means not using the DOM, but that’s a net win: it is faster to learn how to do something with a UI toolkit than to figure out how to do it through DOM manipulation, unless the thing that you’re doing is (at a fundamental level) literally displaying HTML.
I don’t think memory use is really going to be the main factor in convincing corporations to leave Electron. It’s not something that’s limited to the third world: most people in the first world (even folks who are in the top half of income) don’t have computers that can run Electron apps very well – but for a lot of folks, there’s the sense that computers just run slow & there’s nothing that can be done about it.
Instead, I think the main thing that’ll drive corporations toward more sustainable solutions is maintenance costs. It’s one thing to hire cheap web developers & have them build something, but over time keeping a hairball running is simply more difficult than keeping something that’s more modular running – particularly as the behavior of browsers with respect to the corner cases that web apps depend upon to continue acting like apps is prone to sudden (and difficult to model) change. Building on the back of HTML rendering means a red queen’s race against 3 major browsers, all of whom are changing their behaviors ahead of standards bodies; on the other hand, building on a UI library means you can specify a particular version as a dependency & also expect reasonable backwards-compatibility and gradual deprecation.
(But, I don’t actually have a lot of confidence that corporations will be convinced to do the thing that, in the long run, will save them money. They need to be seen to have saved money in the much shorter term, & saying that you need to rearchitect something so that it costs less in maintenance over the course of the next six years isn’t very convincing to non-technical folks – or to technical folks who haven’t had the experience of trying to change the behavior of a hairball written and designed by somebody who left the company years ago.)
I understand that these tools are maintained in a certain sense. But from an outsider’s perspective, they are absolutely not appealing compared to what you see in their competitors.
I want to be extremely nice, because I think the work done on these teams and projects is very laudable. But compare the wxPython docs with the Bootstrap documentation. I also spent a lot of time trying to figure out how to use Tk, and almost all resources felt outdated and incompatible with whatever toolset I had available.
I think Qt is really good at this stuff, though you do have to marry its toolset for a lot of it (perhaps this has gotten better).
The elephant in the room is that no native UI toolset (save maybe Apple’s stack?) is anywhere near as good as the diversity of options and breadth of tooling available in DOM-based solutions. Chrome dev tools is amazing, and even simple stuff like CSS animations gives a lot of options that would be a pain in most UI toolkits. Out of the box it has so much functionality, even if you’re working purely vanilla/“no library”. Though on this point things might have changed; jQuery is basically the optimal low-level UI library and I haven’t encountered native stuff that gives me the same sort of productivity.
I dunno. How much of that is just familiarity? I find the bootstrap documentation so incomprehensible that I roll my own DOM manipulations rather than using it.
TK is easy to use, but the documentation is tcl-centric and pretty unclear. QT is a bad example because it’s quite heavy-weight and slow (and you generally have to use QT’s versions of built-in types and do all sorts of similar stuff). I’m not trying to claim that existing cross-platform UI toolkits are great: I actually have a lot of complaints with all of them; it’s just that, in terms of ease of use, performance, and consistency of behavior, they’re all far ahead of web tech.
When it comes down to it, web tech means simulating a UI toolkit inside a complicated document rendering system inside a UI toolkit, with no pass-throughs, and even web tech toolkits intended for making UIs are really about manipulating markup and not actually oriented around placing widgets or orienting shapes in 2d space. Because determining how a piece of markup will look when rendered is complex and subject to a lot of variables not under the programmer’s control, any markup-manipulation-oriented system will make creating UIs intractably awkward and fragile – and while Google & others have thrown a great deal of code and effort at this problem (by exhaustively checking for corner cases, performing polyfills, and so on) and hidden most of that code from developers (who would have had to do all of that themselves ten years ago), it’s a battle that can’t be won.
It annoys me greatly because it feels like nobody really cares about the conceptual damage incurred by simulating a UI toolkit inside a document renderer inside a UI toolkit, instead preferring to chant “open web!” And then this broken conceptual basis propagates to other mediums (VR) simply because it’s familiar. I’d also argue the web as a medium is primarily intended for commerce and consumption, rather than creation.
It feels like people care less about the intrinsic quality of what they’re doing and more about following whatever fad is around, especially if it involves tools pushed by megacorporations.
Everything (down to the transistor level) is layers of crap hiding other layers of different crap, but web tech is up there with autotools in terms of having abstraction layers that are full of important holes that developers must be mindful of – to the point that, in my mind, rolling your own thing is almost always less work than learning and using the ‘correct’ tool.
If consumer-grade CPUs were still doubling their clock speeds and cache sizes every 18 months at a stable price point and these toolkits properly hid the markup then it’d be a matter of whether or not you consider waste to be wrong on principle or if you’re balancing it with other domains, but neither of those things are true & so choosing web tech means you lose across the board in the short term and lose big across the board in the long term.
Youtube would be a website where you click on a video and it plays. But it wouldn’t have ads and comments and thumbs up and share buttons and view counts and subscription buttons and notification buttons and autoplay and add-to-playlist.
Google docs would be a desktop program.
Slack would be IRC.
What you’re describing is the HTML5 video tag, not a video sharing platform. Minimalism is good, I do agree, but don’t confuse it with having no features at all.
Google docs would be a desktop program.
This is a different debate, about why the web is used for these kinds of tasks at all, not about whether it’s minimalist.
Nice article. Also interesting for reasoning about DNS-over-HTTPS.
As far as I can say from my experience in Kenya, it should also be noted that Africans have a very different way of perceiving time. And security. And… everything! :-D
How would I address this issue?
I think that I would basically create a reverse proxy serving over HTTP those sites that could benefit most from caching (e.g. Wikipedia). Probably with a custom domain such as wikipedia.cached.local, so that people could not be fooled into taking the proxied site for the original. Rewriting URIs in hypertext shouldn’t be an issue, but it could be harder for Ajax pages. Probably I would also create a control page so that a page could be prefetched or updated. With a custom protocol and a server in Europe, one could also prefetch several pages at once and send them back together, maximizing bandwidth usage.
Obviously it wouldn’t be safe, but it would be visibly unsafe, and limited to those websites that can take advantage of such caches without creating serious threats.
As for service workers, I do not think they would improve the user experience at all, since they are local to the browser and the browser has a cache anyway. The problem is sharing such a cache between different machines.
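A minimal sketch of that caching reverse proxy, assuming Node.js and en.wikipedia.org as the upstream (the wikipedia.cached.local hostname from the comment above would be set up in local DNS; `shouldCache` is a made-up helper; URI rewriting and the control page are left out):

```javascript
// Sketch of an HTTP reverse proxy with an in-memory cache, in the spirit
// of the comment above. Not production code: no eviction, no revalidation.
const http = require("http");
const https = require("https");

const UPSTREAM = "en.wikipedia.org"; // assumed upstream site
const cache = new Map();             // path -> { status, type, body }

// Only cache idempotent, successful responses.
function shouldCache(method, status) {
  return method === "GET" && status === 200;
}

function handler(req, res) {
  const hit = cache.get(req.url);
  if (hit) {
    res.writeHead(hit.status, { "content-type": hit.type });
    return res.end(hit.body);
  }
  https.get({ host: UPSTREAM, path: req.url }, (up) => {
    const chunks = [];
    up.on("data", (c) => chunks.push(c));
    up.on("end", () => {
      const entry = {
        status: up.statusCode,
        type: up.headers["content-type"] || "text/html",
        body: Buffer.concat(chunks),
      };
      if (shouldCache(req.method, up.statusCode)) cache.set(req.url, entry);
      res.writeHead(entry.status, { "content-type": entry.type });
      res.end(entry.body);
    });
  }).on("error", () => { res.writeHead(502); res.end("upstream error"); });
}

const server = http.createServer(handler);
// server.listen(80); // uncomment to serve http://wikipedia.cached.local/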
Local reverse proxy is a clever idea, and a proxy that you explicitly set up clients trust a la corporate middleboxes (see Lanny’s comment) seems like it can work in some environments too. Sympathetic to the problem of existing solutions no longer working, sort of surprised the original blog post wasn’t more about how to improve things now.
The point of the machinery I described was to make users explicitly choose between security and access time.
You can make everything smoother (and easier to implement) with a local CA, or by installing suitable fake certificates on the clients plus a transparent proxy, but then people cannot easily opt out.
Worse: they might be trusting the wrong people without any benefit, as for sensitive pages that cannot be cached (shopping carts, online banking and similar…)
That’s why using the reverse proxy should be opt-in, not the default, and trivial to opt out of: there’s no need for a proxy if you want to edit a Wikipedia page!
Sympathetic to the problem of existing solutions no longer working, sort of surprised the original blog post wasn’t more about how to improve things now.
Eric Meyer is a legend of HTML, CSS and Web accessibility. A legend, beyond any doubt.
Before HTML5 I used to read his website daily. He taught me a lot.
But he is a client-side guy.
I think his reference to service workers is an attempt to improve things now.
About analytics: You can do them on the server side by parsing your web logs! That used to be how everyone did it! Google Analytics popularized client side analytics using JavaScript around 2006 or so.
Unfortunately I feel like a lot of the open source web analytics packages have atrophied from disuse. But I wrote some Python and R scripts to parse access.log and it works pretty well for my purposes.
http://www.oilshell.org/ is basically what this article recommends, although I’m using both client-side and server-side analytics. I can probably get rid of the client-side stuff.
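The server-side approach can be sketched in a few lines. The commenter used Python and R; this is the same idea in Node, with `countHits` a made-up name:

```javascript
// Count successful hits per path from an access.log in Common/Combined
// Log Format. countHits is a hypothetical helper, not a library API.
// Example line:
//   1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
const REQUEST = /"(?:GET|HEAD|POST) (\S+) HTTP\/[\d.]+" (\d{3})/;

function countHits(logText) {
  const hits = new Map(); // path -> count
  for (const line of logText.split("\n")) {
    const m = REQUEST.exec(line);
    if (!m || m[2] !== "200") continue; // skip errors and redirects
    hits.set(m[1], (hits.get(m[1]) || 0) + 1);
  }
  return hits;
}
```

Feed it `fs.readFileSync("access.log", "utf8")` and sort the entries by count; that already covers the basic “top pages” report.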
related: http://bettermotherfuckingwebsite.com/ (I am a fan of narrow columns for readability)
I agree. I used to use JAWStats, a PHP web app that parsed and displayed the AWStats-generated data files to provide visually appealing statistics, a lot like Google Analytics but entirely server side, with data originating from apache/nginx log files.
It’s a shame that it was last worked on in 2009. There was a fork called MAWStats but that hasn’t been updated in four years either :(
For a while I self-hosted my feed reader and web analytics via paid-for apps, Mint and Fever by Shaun Inman, but those were abandoned in 2006. It seems like all good software ends up dead sooner or later.
Maybe the GDPR will give these projects a new lease on life.
They are much better for privacy aware people.
It’s been on my list of projects to attempt for a while, but my static site generator Tapestry takes up most of my spare time.
I currently use GoAccess myself, the only thing that would make the HTML reports better is seeing a calendar with visit counters against days.
Why re-create code editors, simulators, spreadsheets, and more in the browser when we already have native programs much better suited to these tasks?
Because the Web is the non-proprietary application platform that actually has traction.
But it’s too dangerous.
Loved the article. I suggest adding the osdev tag.
A few points you might like to reflect upon:
On a completely alternative history note, Pike’s ACME for Plan 9 can be considered a simple hypertext manager that you might find interesting (it’s more than a hypertext manager… but with the plumber it’s also an easy-to-use hypertext system).
Didn’t systemd hard code 8.8.8.8 as well at some point?
It’s such a good thing that people are watching out for violations in free software.
They use it as the default for the fallback if no DNS is configured. https://github.com/systemd/systemd/blob/master/meson_options.txt#L200
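For reference, that compiled-in fallback can be overridden (or emptied) locally via the FallbackDNS= setting in resolved.conf; a sketch, with a placeholder resolver address:

```ini
# /etc/systemd/resolved.conf
[Resolve]
# Replace the built-in fallback (Google's 8.8.8.8 / 8.8.4.4 by default)
# with resolvers of your own choosing; an empty value disables fallback.
FallbackDNS=192.0.2.53
```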
That depends on your individual situation. Some users might appreciate that the system ‘just works’ even if not configured properly. Others wouldn’t, for two reasons:
You cannot imagine how many people mark comments they do not like as incorrect without even checking the sources, commenting, or noticing that they are opinions!
You shouldn’t care much: others might learn something from your comment anyway. At least an incorrect downvote makes you double-check the sources!
Let’s stop here for the moment and repeat: With Mozilla’s change, any (US) government agency can basically trace you down.
Apparently, the issue is not only US government agencies, but I’m starting to think that the introduction of a single point of failure is intentional.
The comments in the post discuss a 5XX vs a 4XX error, and that client-side errors should be fixed by the client. Now I am wondering whether the GDPR applies to European citizens or to people who are currently in Europe (maybe on a day trip, or whatever). I had assumed these GDPR filters use geoIP. But what about a European citizen in the US, or the other way around? I only checked Wikipedia for this, and it says the GDPR applies to EU citizens. So how do you figure out whether a web client is an EU citizen? What am I doing wrong?
The companies are just trying to protect themselves as best they can. Realistically, a European citizen suing a US-only company in a European court over European law is being frivolous and the company will likely not be affected in any way, so the butt-covering of geoip blocking is more a political statement to potential sue-ers than it is actual legal protection.
What is the actual message to European users of such a political statement?
We don’t want your money? We don’t want your data? You do not deserve our technology? We are the Spiders of the Web and you are just a fly?
Btw, as a European I would really appreciate a clear statement on a website saying “we are sorry but we cannot protect your data and respect your rights, please look for one of our competitors that can do it better”.
I’m not ironic.
The GDPR defines several important rights for the data subject that imply certain investments in cybersecurity and a basic quality of service. Being able to say “I cannot do this right, please ask someone else” is a sign of professionalism.
You figure it out by asking them. There are many sites that don’t serve US citizens for various reasons. When you enter them, they ask you to declare you are not a US citizen. It’s as simple as that. If they lie, it’s on them.
Honestly, this GDPR thing has gotten many Americans acting indignant and generally quite irrational over something that hardly changes anything and is not without a slew of precedent. It’s just the first time US companies are visibly, seriously affected by law elsewhere. Now you know how it feels. Get over the feeling and deal with it.
Well, in principle, I would guess that European courts might be apprehensive about dictating law globally, which would essentially be the case if it was found that the GDPR applies to European citizens wherever they may be, even when a website operator had taken all reasonable precautions to block European citizens from using their site.
The GDPR applies to data of European citizens worldwide and to data of non-European citizens collected while they are in the Union.
However, if your registration form has a mandatory checkbox saying “I’m NOT a European citizen and I’m not going to use your services while in the European Union” AND that checkbox is unchecked by default AND you block all European IPs, I think no European court will ever bother you.
This is not news, it’s a raw source.
Until it’s proven, we shouldn’t consider these statements either news or fake.
But IMHO, it’s pretty good material to verify for hackers.
Maybe someone on one of these threads has a Tesla; we have some pentesters on Lobsters, and maybe they’d let them check whether an SSH response happens. That by itself would substantiate the claim with near-zero risk of damage. Well, there might be some stuff to probe and crack to get to that part, depending on implementation. And hacking a Tesla might void some warranty. ;)
EDIT: The thread friendlysock linked to had this quote that indicates it should be easy if source is a knowledgeable insider:
“99% of what i’m talking about is “public” anyway. tesla isn’t encrypting their firmware and it’s really easy to glean information from the vpn with a packet cap because nothing inside the vpn (was) encrypted. dumping tegra 3 model s and x is trivial and tesla’s cars are nowhere near as secure as they’d have you believe.”