Maybe some folk don’t understand what’s going on here, but this is in direct violation of Postel’s law:
They’re blocking access from old devices for absolutely no technical reason; they’re blocking read-only access from folks that might not have any other devices at their disposal.
If you have an old iPod lying around, why on earth should you not be able to read Wikipedia on it? Absolutely no valid technical reason to deny access. Zilch. None. Nada.
There’s no reason it shouldn’t be possible to read Wikipedia over straight HTTP, for that matter.
I know next to nothing about security so correct me if I’m wrong, but doesn’t leaving old protocols enabled make users vulnerable to downgrade attacks?
You’re applying bank-level security to something that’s public information and should be accessible to everyone without a licence or access control in the first place. I don’t even know what comparison fits best here, because in my view requiring HTTPS in the first place was a misguided decision based on politics, corporate interests and fear, not on rational facts. Postel’s law is also a well-established practice in telecommunications, and even Google still follows it — www.google.com still works just fine over straight HTTP, as does Bing; neither mandates TLS for those who don’t want it.
I agree with you, I’d like to be able to access Wikipedia with HTTP, but this is in my opinion a different issue from disabling old encryption protocols.
Accessing Wikipedia with secure and up to date protocols might not be necessary to you but it might be for people who live under totalitarian regimes. One could argue that said regimes have better ways to snoop on their victims (DNS tracking, replacing all certificates with one they own…) but I still believe that if enforcing the use of recent TLS versions can save even a single life, this is a measure worth taking. It would be interesting to know if Wikipedia has data on how much it is used by people living in dictatorships and how much dropping old TLS versions would help these people.
totalitarian regimes
It’s funny you mention it, because this actually would not be a problem under a totalitarian regime with a masquerading proxy and a block return policy for the https port and/or their own certificates and a certificate authority. See https://www.xkcd.com/538/.
Also, are you suggesting that Wikipedia is basically blocking my access for my own good, even though it’s highly disruptive to me and goes against my own self-interest? Yet they tell me it is in my own interest that my access is blocked? Isn’t that exactly what a totalitarian regime would do? Do you not find any sort of irony in this situation?
“Isn’t that exactly what a totalitarian regime would do?”
I think you may have overstated your case here.
this actually would not be a problem under a totalitarian regime with a masquerading proxy and a block return policy for the https port and/or their own certificates and a certificate authority.
Yes, this is what I meant when I wrote “One could argue that said regimes have better ways to snoop on their victims”.
Also, are you suggesting that Wikipedia is basically blocking my access for my own good
No, here’s what I’m suggesting: there are Wikipedia users who live in countries where they could be thrown in jail/executed because of pages they read on Wikipedia. These users are not necessarily technical, do not know what a downgrade attack is and this could cost them their lives. Wikipedia admins feel they have a moral obligation to do everything they can to protect their lives, including preventing them from accessing Wikipedia if necessary. This is a price they are willing to pay even if it means making Wikipedia less convenient/impossible to use for other users.
If they left HTTP enabled, yeah, sure. But I don’t think any attack that downgrades the SSL/TLS encryption method exists; both parties always connect using the best they have. If one exists, please let me know.
There is no technical reason I’m aware of. Why does wikipedia do this? It’s not like I need strong encryption to begin with, I just want to read something on the internet.
I still have a usable, working smartphone with Android Gingerbread; it’s the first smartphone I ever used. It’s still working flawlessly, and I sometimes use it when I want to quickly look something up but my current phone has no battery and I don’t want to turn on my computer.
This move will, for no reason, kill my perfectly working smartphone.
But I don’t think any attack that downgrades the SSL/TLS encryption method exists,
Downgrade attacks are possible with older versions of SSL e.g. https://www.ssl.com/article/deprecating-early-tls/
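For what it’s worth, here is a minimal sketch (using Python’s standard ssl module, not anything Wikipedia actually runs) of what “dropping old TLS versions” amounts to on the server side; the certificate paths are placeholders:

```python
import ssl

# A minimal sketch, not Wikipedia's actual configuration: a server-side TLS
# context that refuses the deprecated protocol versions discussed above, so
# clients can no longer negotiate anything older than TLS 1.2.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
# Placeholder certificate/key paths.
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
```

An old client that only speaks TLS 1.0 then fails the handshake outright instead of silently negotiating a weaker protocol, which is the downgrade window the linked article describes.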
It’s not like I need strong encryption to begin with, I just want to read something on the internet.
Which exact page you’re looking at may be of interest, e.g. if you’re reading up on medical stuff.
Which exact page you’re looking at may be of interest, e.g. if you’re reading up on medical stuff.
Are you suggesting that we implement access control in public libraries, so that no one can browse or check out any books without strict supervision, approvals and logging by some central authority? (Kinda like 1984?)
Actually, are you suggesting that people do medical research and trust information from Wikipedia, literally edited by anonymous people on the internet?! HowDareYou.gif. Arguably, this is the most misguided security initiative in existence if you think of it this way. Per my records, my original accounts on Wikipedia were created before they even had support for any TLS at all; which is not to say TLS isn’t needed at all, just that it shouldn’t be a mandatory requirement, especially for read-only access.
P.S. BTW, Jimmy_Wales just responded to my concerns — https://twitter.com/jimmy_wales/status/1211961181260394496.
Are you suggesting that we implement access control in public libraries, so that no one can browse or check out any books without strict supervision, approvals and logging by some central authority? (Kinda like 1984?)
I’m saying that you may not wish other people to infer what medical conditions you may have based on your Wikipedia usage. So TLS as the default is desirable here, but whether it should be mandatory is another question.
Are you suggesting that we implement access control in public libraries, so that no one can browse or check out any books without strict supervision, approvals and logging by some central authority? (Kinda like 1984?)
PSST, public libraries in the western world already do this to some extent. Some countries are more centralized about it than others, thanks to the US PATRIOT Act.
public libraries in the western world
Not my experience at all; some private-university-run libraries do require ID for entry, but most city-, county- and state-run libraries still allow free entry without having to identify yourself in any way. This sometimes even extends to study-room reservations (which can often be made under any name) and anonymous computer use, too.
I still have a usable, working smartphone with Android Gingerbread; it’s the first smartphone I ever used. It’s still working flawlessly, and I sometimes use it when I want to quickly look something up but my current phone has no battery and I don’t want to turn on my computer.
This move will, for no reason, kill my perfectly working smartphone.
It’s not working flawlessly: the old crypto protocols and algorithms it uses have been recalled like a Takata airbag, and you’re holding on because it hasn’t blown up in your face yet.
This move will, for no reason, kill my perfectly working smartphone.
(my emphasis)
So you just use this phone to access Wikipedia, and use it for nothing else?
If so, that’s unfortunate, but your ire should be directed to the smartphone OS vendor for not providing needed updates to encryption protocols.
your ire should be directed to the smartphone OS vendor for not providing needed updates to encryption protocols
I think it’s pretty clear that the user does not need encryption in this use case, so I don’t see any reason to complain to the OS vendor about encryption when you don’t want to be using any encryption in the first place. Like, seriously, what sort of arguments are these? Maybe it’s time to let go of the politics in tech and provide technical solutions to technical problems?
As per my comment, I do believe that the authentication provisions of TLS are applicable to Wikipedia.
Besides, the outrage if WP had not offered HTTPS at all would be way bigger than what we’re seeing now.
I find the connection to Postel’s law rather weak here, but in any case: this is the worst argument you could make.
It’s pretty much consensus among security professionals these days that Postel’s law is a really bad idea: https://tools.ietf.org/html/draft-iab-protocol-maintenance-04
I don’t think what passes for “postel’s law” is what Postel meant, anyway.
AFAICT, Postel wasn’t thinking about violations at all, he was thinking about border conditions etc. He was the RFC editor; he didn’t want anyone to ignore the RFCs, he wanted them to be simple and easy to read. So he wrote “where the maximum line length is 65” and meant 65. He omitted “plus CRLF” or “including CRLF” because too many dotted i’s make the prose heavy, so you ought to be liberal in what you accept and conservative in what you generate. But when he wrote 65, he didn’t intend the readers to infer “accept lines as long as RAM will allow”.
https://rant.gulbrandsen.priv.no/postel-principle is the same argument, perhaps better put.
IMO this is another case of someone wise saying something wise, being misunderstood, and the misunderstanding being a great deal less wise.
I can’t really understand advocating laws around protocols except for “the protocol is the law”. Maybe you had to be there at the time.
As I understand it, they’re protecting one set of users from a class of attack by disabling support for some crypto methods. That seems very far from “absolutely no technical reason”.
As for HTTP, if that were available, countries like Turkey would be able to block Wikipedia on a per-article basis, and/or surveil their citizens on a per-article basis. With HTTPS-only, such countries have to open/close Wikipedia in toto, and cannot surveil page-level details. Is that “no reason”?
As for HTTP, if that were available, countries like Turkey would be able to block Wikipedia on a per-article basis, and/or surveil their citizens on a per-article basis. With HTTPS-only, such countries have to open/close Wikipedia in toto, and cannot surveil page-level details. Is that “no reason”?
I don’t understand why people think this is an acceptable argument for blocking HTTP. It reminds me of that jealous spouse scenario where someone promises to inflict harm, either to themselves or to their partner, should the partner decide to leave the relationship. “I’ll do harm if you censor me!”
So, Turkey wants to block Wikipedia on a per-article basis? That’s their decision, and they’ll go about it one way or another; I’m sure the politicians don’t particularly care about the tech involved anyway (and again, it’s trivial for any determined entity to block port 443 and run a masquerading proxy on port 80, and if this is done on all internet connections within the country, it’ll work rather flawlessly and no one would know any better). So it’s hardly a deterrent for Turkey anyway. Why are you waging your regime-change wars on my behalf?
Well, Wikipedia is a political project, in much the same way that Stack Overflow is. The people who write have opinions on whether their writings should be available to people who want to read.
You may not care particularly whether all of or just some of the information on either Wikipedia or SO are available to all Turks, but the people who wrote that care more, of course. They wouldn’t spend time writing if they didn’t care, right? To these people, wanting to suppress information about the Turkish genocide of 1915 is an affront.
So moving to HTTPS makes sense to them. That way, the Turkish government has to choose between allowing all of Wikipedia, the genocide article included, and blocking Wikipedia entirely.
The Wikipedians are betting that the second option is unpopular with the Turks.
It’s inconvenient for old iPad users, but if you ask the people who spend time writing, I’m sure they’ll say that being able to read about your country’s genocide at all is vastly more important than being able to read using old iPads.
I can think of several reasons:
So what’s to stop a totalitarian regime from doing the following?
The difficulty is to set up/enroll TotallyLegitCA. How do you do that? If TotallyLegitCA is public, the Certificate Transparency logs will quickly reveal what they are doing. The only way to pull that off seems to be to force people to have your CA installed, like Kazakhstan is doing.
We’re talking about a totalitarian regime (or, you know, your standard corporation that installs its own CA in the browser).
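As an aside, here is a minimal sketch of the kind of client-side pinning check that would notice a substituted certificate from such a proxy; the pinned value below is a placeholder, not Wikipedia’s real fingerprint:

```python
import hashlib
import ssl

# Sketch: compare the certificate the server presents against a pinned,
# known-good SHA-256 fingerprint. A MITM proxy re-signing traffic with its
# own CA would present a different certificate and fail this check.
PINNED_SHA256 = "0" * 64  # placeholder, not the real fingerprint

def cert_fingerprint(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

if cert_fingerprint("en.wikipedia.org") != PINNED_SHA256:
    print("Certificate does not match the pin: possible interception.")
```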
That’s actually incorrect. There are various technical reasons. But also remember that they need to operate on a vast scale as a non-profit. This is hard.
Here are some technical reasons. I’m sure others will chime in as there are likely many more.
providing a read-only version without login over HTTP shouldn’t really add any new code, except they’d be on an HTTP/2-only webserver, if I’m not mistaken.
There are arguments for an inverse Postel’s law given in https://m.youtube.com/watch?v=_mE_JmwFi1Y
But I hear all the time that I must ensure my personal site uses HTTPS and that soon browsers will refuse to connect to “insecure” sites. Isn’t this a good thing Wikipedia is doing? /s
Edit also see this discussion: https://lobste.rs/s/xltmol/this_page_is_designed_last#c_keojc6
I have HTTPS on my completely static website mostly so that no one asks why I don’t have HTTPS, but on the other hand, the “completely static” part is only relevant as long as there are only Eves in the middle and no Mallories.
If serving everything over HTTPS will make the life of ISPs injecting ads and similar entities harder, it’s a good thing, until there’s a legal rather than technical solution to that.
I actually think that HTTPS is reasonable for Wikipedia, if for nothing else than to hinder third parties from capturing your embarrassing edits to “MLP: FIM erotica” and tracing them back to you. For a static, read-only site it just adds cost and/or a potential point of failure.
For a static, read-only site it just adds cost and/or a potential point of failure.
dmbaturin just said what the value add is. HTTPS prevents third parties from modifying the content of your static site.
This site is claiming to offer a “standard for opting out of telemetry”, but that is something we already have: Unless I actively opt into telemetry, I have opted out. If I run your software and it reports on my behavior to you without my explicit consent, your software is spyware.
but that is something we already have: Unless I actively opt into telemetry, I have opted out.
I know this comes up a lot, but I disagree with that stance. The vast majority of people leave things at their defaults. The quality of information you get from opt-in telemetry is so much worse than from telemetry by default that it’s almost not worth it.
The only way I could see “opt-in” telemetry actually working is caching values locally for a while and then being so obnoxiously annoying about “voluntarily” sending the data that people will do it just to shut the program up about it.
That comment acts like you deserve to have the data somehow? Why should you get telemetry data from all the people that don’t care about actively giving it to you?
That comment acts like you deserve to have the data somehow?
I’ve got idiosyncratic views on what “deserving” is supposed to mean, but I’ll refrain from going into philosophy here.
Why should you get telemetry data from all the people that don’t care about actively giving it to you?
Because the data is better and more accurate. Better and more accurate data can be used to improve the program, which is something everyone will eventually benefit from. But with opt-in, you skew the data towards the kinds of people who bother to opt into telemetry, and they may not be representative.
Without any telemetry, you’ll instead either (a) get the developers’ gut instinct (which may fail to reflect real-world usage), or (b) have the minority that opens bug tickets dictate the UI improvements, possibly mixed with (a). Just as hardly anyone (in the grand scheme of things) bothers with opting into telemetry, hardly anyone bothers opening bug tickets. Neither group may be representative of the silent majority that just wants to get things done.
Consider the following example as an illustration of what I mean (it is a deliberate oversimplification; debate my points above, not the illustration):
Assume you have a command-line program that has 500 users. Assume you have telemetry. You see that a significant percentage of invocations involve the subcommand check, but no such command exists; most such invocations are immediately followed by the correct info command. Therefore, you decide to add an alias. Curiously, nobody has told you about this yet. However, once the alias is there, everyone is happier and more productive.
Had you not had telemetry, you would not have found out (or at least not found out as quickly, only when someone got disgruntled enough to open an issue). The “quirk” in the interface may have scared off potential users to alternatives, not actually giving your program a fair shot because of it.
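To make the illustration concrete, here is a small sketch of both halves: counting unknown subcommands (the signal telemetry would surface) and the alias that fixes the problem. The subcommand names come from the hypothetical example above; nothing here is any real project’s code.

```python
import sys

# Subcommands of the hypothetical CLI from the example above.
KNOWN = {"info", "install", "remove"}
# The fix the telemetry suggested: alias the commonly mistyped "check".
ALIASES = {"check": "info"}

def record_unknown_subcommand(name: str) -> None:
    # Placeholder: in a telemetry-enabled build, this counter is what would
    # reveal that many users try a subcommand that does not exist.
    pass

def main(argv):
    if not argv:
        print("usage: tool <subcommand>")
        return
    cmd = ALIASES.get(argv[0], argv[0])
    if cmd not in KNOWN:
        record_unknown_subcommand(argv[0])
        print(f"unknown command: {argv[0]}")
        return
    print(f"running {cmd}")

if __name__ == "__main__":
    main(sys.argv[1:])
```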
Bob really wants a new feature in a piece of software he uses. Bob suggests it to the developers, but they don’t care. As far as they can tell, Bob is the only one wanting it. Bob analyzes the telemetry-related communication and writes a simple script that imitates it.
Developers are concerned about the privacy of their users and don’t store IP addresses (it’s less than useless to hash them), only making it easier for Bob to trick them. What appears as a slow growth of active users, and a common need for a certain feature, is really just Bob’s little fraud.
It’s possible to make this harder, but it takes effort. It takes extra effort to respect users’ privacy. Is developing a system to spy on the users really more worthy than developing the product itself?
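For a sense of how little Bob’s script would need, here is a minimal sketch; the endpoint URL and event format are entirely made up, and the point is only that an unauthenticated, identifier-free endpoint is trivial to replay against:

```python
import json
import urllib.request

# Fabricated usage event; the field names and URL are made up for illustration.
EVENT = {"feature": "the-feature-bob-wants", "action": "used"}

for _ in range(1000):
    req = urllib.request.Request(
        "https://telemetry.example.com/event",
        data=json.dumps(EVENT).encode(),
        headers={"Content-Type": "application/json"},
    )
    # With no IP addresses or identifiers stored server-side, each request
    # can pass for a distinct user.
    urllib.request.urlopen(req)
```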
You also (sort of) argued that opt-in telemetry is biased. That’s not exactly right, because telemetry is always biased. There are users with no Internet access, or at least an irregular one. And no, we don’t have to be talking about developing countries here. How do you know the majority of your users aren’t medical professionals or lawyers whose computers are not connected to the Internet for security reasons? I suspect it might be more common than we think. Then on the other hand, there are users with multiple devices. What can appear as n different users can really just be one.
It sort of depends on your general philosophical view. You don’t have to develop software for free, and if you do, it’s up to you to decide the terms and conditions and the level of participation you expect from your users. But if we’re talking about free software, I think that telemetry, if any, should be completely voluntary on a per-request basis, with a detailed listing of all information that’s to be sent in both human- and machine-readable form (maybe compared to the average), and either smart enough to prevent fraudulent behavior, or treated with strong caution, because it may as well be just utter garbage. Statistically speaking, it’s probably the case anyway.
I’m well aware that standing behind a big project, such as Firefox, is a huge responsibility, and it would be really silly to advise developers to just trust their guts instead of trying to collect at least some data. That’s why I also suggested how I imagine decent telemetry. I believe users would be more than willing to participate if they saw, for example, that they used a certain feature an above-average number of times, and that their vote could stop it from being removed. It’s also possible to secure per-request telemetry with a captcha (or something like that) to make it slightly more robust. If this came up once every few months, “hey, dear users, we want to ask”, hardly anyone would complain. That’s how some software does it, after all.
The fraud thing is an interesting theory, but I am unaware how likely it is; you’ve theorised a Bob who can generate fraudulent analytics but couldn’t fake an IP address or use multiple real IP addresses or implement the feature he actually wants.
It’s not that he couldn’t do it, it’s just much simpler without that. It’s really about the cost. It’s easy to curl, it’s more time consuming or expensive to use proxies, and even more so to solve captchas (or any other puzzles). The lower the cost, the higher the potential inaccuracy. And similarly, with higher cost, even legitimate users might be less willing to participate.
I don’t have some universal solution or anything. It’s just something to consider. Sometimes it might be reasonable to put effort into making a robust telemetric system, sometimes none at all would be preferred. I’m trying to think of a case “in between”, but don’t see a single situation where laughably-easy-to-fake results could be any good.
Telemetry benefits companies, otherwise companies wouldn’t use it. Perhaps it can benefit users, if the product is improved as a result of telemetry. But it also harms users by compromising their privacy.
The question is whether the benefits to users outweigh the costs.
Opt-out telemetry-using companies obviously aren’t concerned about the costs to users, compared to the benefits they (the companies) glean from telemetry-by-default. They are placing their own interests first, ahead of their users. That’s why they resort to dark patterns like opt-out.
You assume that we actually need telemetry to develop good software. I’m not so sure. We developed good software for decades without telemetry; why do we need it now?
When I hear the word “telemetry”, I’m reminded of an article by Joel Spolsky where he compared Sun’s attempts at developing a GUI toolkit for Java (as of 2002) to Star Trek aliens watching humans through a telescope. The article is long-winded, but search for “telescope” to find the relevant passage. It’s no coincidence that telemetry and telescope share the same prefix. With telemetry, we’re measuring our users’ behavior from a distance. There’s not a lot of signal there, and probably a lot of noise.
It helps if we can develop UsWare, not ThemWare. And I think this is why it’s important for software development teams to be diverse in every way. If our teams have people from diverse backgrounds, with diverse abilities and perspectives, then we don’t need telemetry to understand the mysterious behaviors of those mysterious people out there.
(Disclaimer: I work at Microsoft on the Windows team, and we do collect telemetry on a de-facto opt-out basis, but I’m posting my own opinion here.)
we don’t need telemetry to understand the mysterious behaviors of those mysterious people out there
Telemetry usually is not about people’s behaviors, it’s about the mysterious environments the software runs in, the weird configurations and hardware combinations and outdated machines and so on.
Behavioral data should not be called telemetry.
One concrete benefit of telemetry: “How many people are using this deprecated feature? Should we delete it in this version or leave it in a while longer?”
We developed good software for decades without telemetry; why do we need it now?
Decades-old software is carrying decades-old cruft that we could probably delete, but we just don’t know for sure. And we all pay the complexity costs one paper cut at a time.
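As a sketch of what that looks like in code (the feature name and where the counter ends up are made up):

```python
import collections

# Count invocations of deprecated code paths so the "delete it or keep it a
# while longer" decision is based on observed usage rather than guesswork.
DEPRECATED_USAGE = collections.Counter()

def note_deprecated(feature: str) -> None:
    DEPRECATED_USAGE[feature] += 1

def frobnicate_legacy(data):
    note_deprecated("frobnicate_legacy")  # hypothetical deprecated feature
    return data  # ...existing behaviour...

# On shutdown, and only if the user consented, DEPRECATED_USAGE would be
# flushed to whatever telemetry sink the project uses.
```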
I’m as opposed to surveillance as anybody else in this forum. But there’s a steelman question here.
The quality of information you get from opt-in telemetry is so much worse than from telemetry by default that it’s almost not worth it.
A social scientist could likewise say: “The quality of information you get from observing humans in a lab is so much worse than when you plant video cameras in their home without them knowing.”
How is this an argument that it’s ok?
There are three differences as far as I can tell:
The data from a hidden camera is not anonymizable. Telemetry, if done correctly (anonymization of data as much as possible, no persistent identifiers, transparency as to what data is and has been sent in the past), cannot be linked to a natural person or an individual handle; a concrete sketch of what that can look like follows after the third point below. Therefore, I see no harm to the individual caused by telemetry implemented in accordance with best data protection practices.
Furthermore, the data from the hidden camera cannot cause corrective action. The scientist can publish a paper, maybe it’ll even have revolutionary insight, but can take no direct action. The net benefit is therefore slower to be achieved and very commonly much less than the immediate, corrective action that a software developer can take for their own software.
Finally, it is (currently?) unreasonable to expect a hidden camera in your own home, but there is increasing public awareness that telemetry exists and that settings should be inspected if this poses a problem. People who do care to opt out will try to find out how to opt out.
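To illustrate the first point, here is a sketch of a telemetry payload with coarse, non-identifying fields only and no persistent identifier that could link two reports from the same machine; the field names are illustrative, not any real product’s schema:

```python
import json
import platform
import sys

# Every field is coarse enough to be shared by many users, and nothing links
# two reports from the same machine. Field names are illustrative only.
event = {
    "app_version": "1.4.2",
    "os": platform.system(),  # e.g. "Linux", "Darwin", "Windows"
    "python": ".".join(map(str, sys.version_info[:2])),
    "feature": "export_pdf",  # hypothetical feature being counted
}
print(json.dumps(event))
# Deliberately absent: UUIDs, hostnames, usernames, IP addresses, and
# timestamps precise enough to fingerprint a session.
```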
Finally, it is (currently?) unreasonable to expect a hidden camera in your own home, but there is increasing public awareness that telemetry exists and that settings should be inspected if this poses a problem. People who do care to opt out will try to find out how to opt out.
I think this is rather deceptive. Basically it’s saying: “we know people would object to this, but if we slowly and covertly add it everywhere we can eventually say that we’re doing it because everyone is doing it and you’ve just got to deal with it”.
I still disagree but I upvoted your post for clearly laying out your argument in a reasonable way.
You seem to miss a very easy, obvious, opt-in-only strategy that worked for the longest time without making your software feel like that creepy uncle in the corner undressing everyone. As you pointed out, everyone keeps the defaults; you know what else most normies do? Click next until they can start their software. So you add a step to that first-run dialog that is supposed to be there to help the users, with a simple “Hey, we use telemetry to improve our software ([here is where you can see your data](https://yoursoftware.com/data)) and here is our [privacy policy](https://yoursoftware.com/privacy). By checking this box you agree to telemetry and data collection as outlined in our [data collection policy](https://yoursoftware.com/data_collection) [X]”
And boom, you satisfy both conditions: the one where people don’t go out of their way to opt into data collection, and the one where you’re not the creepy uncle in the corner undressing everyone.
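A rough sketch of that first-run step for a command-line tool; the config location and wording are made up (the URLs are the ones from the comment above), and the key property is that just hitting Enter leaves telemetry off:

```python
import json
import pathlib

# Hypothetical config location for the first-run consent flag.
CONFIG = pathlib.Path.home() / ".yoursoftware" / "config.json"

def telemetry_consent() -> bool:
    if CONFIG.exists():
        return json.loads(CONFIG.read_text()).get("telemetry", False)
    answer = input(
        "We use telemetry to improve our software.\n"
        "Data collected: https://yoursoftware.com/data\n"
        "Privacy policy: https://yoursoftware.com/privacy\n"
        "Enable telemetry? [y/N] "
    )
    consent = answer.strip().lower() == "y"  # default (just Enter) means "no"
    CONFIG.parent.mkdir(parents=True, exist_ok=True)
    CONFIG.write_text(json.dumps({"telemetry": consent}))
    return consent
```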
This is a bad comment, because it doesn’t add anything except for “I think non-consensual tracking is bad”, and is only tangentially related to the OP insofar as the OP is used as a soapbox for the above sentiment. Therefore I have flagged the comment as “Me-too”, regardless of how much I may agree with it.
Except that in the European Union, the GDPR requires opt-in in most cases. IANAL, but I think it applies to the analytics that Homebrew collects as well. From the Homebrew website:
A Homebrew analytics user ID, e.g. 1BAB65CC-FE7F-4D8C-AB45-B7DB5A6BA9CB. This is generated by uuidgen and stored in the repository-specific Git configuration variable homebrew.analyticsuuid within $(brew --repository)/.git/config.
https://docs.brew.sh/Analytics
From the GDPR:
The data subjects are identifiable if they can be directly or indirectly identified, especially by reference to an identifier such as a name, an identification number, location data, an online identifier or one of several special characteristics, which expresses the physical, physiological, genetic, mental, commercial, cultural or social identity of these natural persons.
I am pretty sure that this UUID falls under identification number or online identifier. Personally identifiable information may not be collected without consent:
Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her, such as by a written statement, including by electronic means, or an oral statement.
So, I am pretty sure that Homebrew is violating the GDPR and EU citizens can file a complaint. They can collect the data, but then they should have an explicit opt-in step during the installation, and the default (e.g. when the user just hits RETURN) should be to disable analytics.
The other interesting implication (if this is indeed collection of personal information under the GDPR) is that any user can ask Homebrew which data they have collected and/or ask them to remove that data, and Homebrew would have to comply.
The data subjects are identifiable if they can be directly or indirectly identified, especially by […]
As far as I can tell, you’re not actually citing the GDPR (CELEX 32016R0679), but rather a website that tries to make it more understandable.
GDPR article 1(1):
This Regulation lays down rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data.
GDPR article 4(1) defines personal data (emphasis mine):
‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;
Thus it does not apply to data about people that are neither identified nor identifiable. An opaque identifier like 1BAB65CC-FE7F-4D8C-AB45-B7DB5A6BA9CB is not per se identifiable, but as per recital 26, determining whether a person is identifiable should take into account all means reasonably likely to be used, such as singling out, suggesting that “identifiable” in article 4(1) needs to be interpreted in a very practical sense. Recitals are not technically legally binding, but are commonly referred to for interpretation of the main text.
Additionally, if IP addresses are stored along with the identifier (e.g. in logs), it’s game over in any case; even before GDPR, IP addresses (including dynamically assigned ones) were ruled by the ECJ to be personal data in Breyer v. Germany (ECLI:EU:C:2016:779 case no. C-582/14).
Sorry for the short answer in my other comment. I was on my phone.
Thus it does not apply to data about people that are neither identified nor identifiable. An opaque identifier like 1BAB65CC-FE7F-4D8C-AB45-B7DB5A6BA9CB is not per se identifiable,
The EC thinks differently:
Examples of personal data
a cookie ID;
the advertising identifier of your phone;*
https://ec.europa.eu/info/law/law-topic/data-protection/reform/what-personal-data_en
It seems to me that a UUID is similar to a cookie ID or advertising identifier. Using the identifier, it would also be trivially possible to link data. They use Google Analytics. Google could in principle cross-reference some application installs with Google searches and time frames. Based on the UUID, they could then see all other applications that you have installed. Of course, Google does not do this, but this thought experiment shows that such identifiers are not really anonymous (as pointed out in the working party opinion of 2014, linked on the EC page above).
Again, IANAL, but it would probably be OK to report installs without any identifier linking the installations. They could also easily do this: make it opt-in, report all users who didn’t opt in under a single shared identifier, and generate a random identifier for users who do opt in.
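A minimal sketch of that split; the constant and function names are made up for illustration, not Homebrew’s actual code:

```python
import uuid
from typing import Optional

# Installs from users who did not opt in are all reported under one shared,
# constant identifier: they can still be counted, but never linked to each
# other. Users who explicitly opt in get a random per-install identifier.
SHARED_ANONYMOUS_ID = "00000000-0000-0000-0000-000000000000"

def analytics_id(opted_in: bool, stored_id: Optional[str] = None) -> str:
    if not opted_in:
        return SHARED_ANONYMOUS_ID
    return stored_id or str(uuid.uuid4())
```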
They locked the PR talking about it and accused me of implying a legal threat for bringing it up. The maintainer who locked the thread seems really defensive about analytics.
Once you pop, you can’t stop.
I, too, thought that your pointing out their EU-illegal activity was distinct from a legal threat (presumably you are not a prosecutor), and that they were super lame for both mischaracterizing your statement and freaking out like that.
The maintainer who locked the thread seems really defensive about analytics.
It seems this is just a general trait. See e.g. this
Now I really wish I had an ECJ decision to cite, because at this point it’s an issue of interpretation. What is an advertising identifier in the sense that the EC understood it when they wrote that page: is it persistent, and can it be correlated with some other data to identify a person? Did they take into account web server logs when noting down the cookie ID?
Interesting legal questions, but unfortunately nothing I have a clear answer to.
Please cite the rest of paragraph 4, definitions:
‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;
https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32016R0679
Which was what I quoted.
Your comment makes the following quotations:
The data subjects are identifiable if they can be directly or indirectly identified, especially by reference to an identifier such as a name, an identification number, location data, an online identifier or one of several special characteristics, which expresses the physical, physiological, genetic, mental, commercial, cultural or social identity of these natural persons.
Please ^F this entire string in the GDPR. I fail to find it as-is. They only start matching up in the latter half starting at “an identifier” and ending with “social identity”.
(1) ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;
I agree it’s pedantic of me, but it’s not a 1:1 quote from the GDPR if a sentence is modified, no matter how small.
I’ve edited in the second half in any case. I do not, however, see any way that modification would invalidate any of the points I’ve made there.
Or don’t submit a PR. As the project has stated:
Do not open new threads on this topic.
People have been banned from the project for doing exactly this.
Yeah, I got the impression that they are pretty hardline on this. I hope that they’ll reconsider before someone files a GDPR complaint.
Personally, I don’t really have a stake in this anymore, since I barely use my Mac.
I guess a more creative solution would be to fork the main repo and disable the analytics code and point people to that.
Edit: the linked PR is from before the GDPR though.
But the above user didn’t post that, did they? Your comment was meaningful and useful, but theirs was just sentimental. A law violation is a law violation, but the OP just posted their own feelings about what they think is spyware and didn’t say anything about the GDPR.
Hmm, I disagree. The OP is claiming that we should have a unified standard for “Do_Not_Track”. Finn is arguing that we shouldn’t need such a standard, because unless I specifically state that I would like to be tracked, I should not be tracked, and any attempt to track is a violation of consent. Finn here is specifically disagreeing with the website in question. Should we organize against attempts to track without explicit consent, or give a unified way to opt out? These are fundamentally different questions, even though they are directly related. If I say everyone should be allowed into any yard unless it has a private property sign, that may cause real concern for people who feel that no yard should permit trespassing without explicit permission. They are different concerns that are related, and more nuanced than “thing is bad”.
Okay. By your (non-accepted) definition, spyware abounds and is in common use.
Simply calling it “spyware” and throwing up your hands doesn’t work. They have knobs to turn the spying off, to opt out. I just want all those knobs to have the same label.
Well, one of my pet hates about phones these days is the sheer size of them – they’re basically a tablet in your pocket. I want a phone.
And I thought I was the only one on the planet going nuts about this. I also bought an iPhone SE 2 years ago for exactly the same reason. I don’t get what the fascination with large sized screens is. You can’t even operate the phone with one hand.
Anyway, I switched to iOS for similar reasons. Personally for me, the final straw was the Android permission system. Once an app has all the permissions, it has them all the time. Not sure if things have changed in the meanwhile, but I’m super happy that I switched.
Once an app has all the permissions, it has them all the time. Not sure if things have changed in the meanwhile, but I’m super happy that I switched.
The size of phones is just killing me, and they seem to be getting bigger. Hopefully the fashion will change soon.
Yeah, I cannot understand why smartphones are getting bigger and bigger. It’s like sane phone sizes ended around 2014-2015, especially if you want good specs.
In 2014 you could buy the Z1 Compact (dimensions: 127 x 64.9 x 9.5 mm (5.0 x 2.56 x 0.37 in); screen: 4.3 inches, 51.0 cm2 (~61.8% screen-to-body ratio)); now it’s practically impossible to buy a similarly sized high-end smartphone.
A ~4.5” screen seems to be the limit for being able to somewhat comfortably operate a smartphone in one hand using your thumb (unless you have big hands, of course, but I don’t). With the Redmi 2 (dimensions: 134 x 67 x 9 mm (5.28 x 2.64 x 0.35 in); screen: 4.7 inches, 60.9 cm2 (~67.8% screen-to-body ratio)) I’m actually already unable to reach the top of the screen and the 3 buttons below the screen with my thumb without slightly readjusting my hand position.
BTW, it’s equally ridiculous to me to put 3K screens in such a small form factor as smartphones (well, they’re reaching 6” screens already, but even so). Going over Full HD seems quite wasteful and only brings more battery drain. I doubt there are people using their phones with a magnifying glass…
My guess: more and more people are using phones as their only computer, so sizes will continue to increase in order to accommodate them.
Exactly this – my partner and I are basically polar opposites on this front. When I need to do anything beyond simple, brief content generation (text messages, and only short ones), I reach for my laptop. The time to take it out of my bag, tether it to my phone, get online, do the work and then put everything away is still less than what it would take me at my slow speed on a phone.
Work-wise, the phone is an accessory to my computer(s)… and mostly I use it as a, well, phone, clock, alarm and message-receiving device.
My partner legitimately runs two businesses from their phone. The phone is the primary content generation device, email, texts, taking photos, editing images, scheduling, planning, online resources, looking at sales data, ordering new products for the workplace, etc, etc – everything is done on the phone first, and begrudgingly done on a laptop if a product just won’t work from their phone. That product is likely to have a short shelf life because having a great phone experience is probably the most important feature to them.
And I know people who use 17” laptops for impromptu presentations. Harder to find those these days.
For small table presentations (impromptu or otherwise), I love mirror mode to a 15.6” USB powered monitor, so I can sit behind my computer when presenting. Specifically great for like live coding / pair coding.
Yep. I think almost all of them use the generic DisplayLink USB 3 stuff – which works (might need to install a Displaylink generic driver).
I am not sure if I got lucky or what, but I haven’t had a major issue with it and been using them for a few years now on Ubuntu 16.04 and now 18.04 (various spins of it). I don’t use too many of the DisplayLink features (rotation, sound support, etc – so maybe that is where stuff gets hung up?).
A ~4.5” screen seems to be the limit for being able to somewhat comfortably operate a smartphone in one hand using your thumb (unless you have big hands, of course, but I don’t).
I agree. I switched from an iPhone SE to an iPhone 6s with its 4.7” screen, which is just a bit too large. Luckily, iOS has this handy feature where if you double-tap (not press) the home button, it will move the image 50% down, making it easier to reach the top buttons. I’d still love a 4.3 or 4.5” iPhone though. But it’s not where things are moving, so it’s unlikely to ever happen (they even abandoned 4.7” on new models; yes, I know, newer models have smaller bezels).
Funnily enough, the giant phone trend is being driven by Asian markets, specifically Chinese and Korean customers. They all want massive screens for some reason. Americans don’t really help the trend much, but for once we’re not actually driving the bus.
That part was music to my ears as well :-) https://mastodon.social/@isagalaev/100981223084458245
We should start a consumer group or something (oh well, who am I kidding?)
Myself, I am trying to get some documentation done for some of the table plugins for OpenSMTPD, and to get some Exchange Web Services stuff written down.
I am going to spend this week trying to figure out how to get myself one of those fancy hats that have been implemented on lobste.rs!
Also getting married! \o/
http://blog.corrupted.io. You can use HTTPS if you don’t mind a cert error, since it is a GH Pages-backed blog. It’s mostly about email systems like Exchange and OpenSMTPD.
I love the bullshit rhetoric from these ad execs:
“This is damaging to consumer interest and will undermine the Internet.”
“If Mozilla follows through on its plan … the disruption will disenfranchise every single Internet user,”
“All of us will lose the freedom to choose our own online experiences; we will lose the opportunity to monitor and protect our privacy”
I think they suffer from faulty metrics. I prefer targeted advertising to the broad ads that made me mute the TV for 18 minutes out of every hour in ancient times. Who wouldn’t? I’m sure that polls well in focus groups. But the cost is too high with the methods they use, and I would be surprised if people still wanted it after knowing the invasiveness of current methods.
I wish they would just tell the truth instead of making me infer it. All the IAB has pretty much said is “Wah wah wah, we won’t be able to profit off violating Firefox users’ privacy by default; we will have to convince Firefox users to let us violate their privacy and work for our money.”
A tiny bit more nuanced here: http://marc.info/?l=openbsd-misc&m=133858003425034&w=2 (and rest of the thread seems relevant too).
In before someone claims they’ve got special eyes which are different from everyone else’s.
The most common complaint I hear is headaches. I’m sure there’s plenty of people who are very light sensitive and do get headaches, but most people should try turning the brightness down before putting themselves into that category.
Newer monitors are bright AF.
My wife was able to buy a monitor with someone else’s money, so I recommended she get the LG Ultrafine 5k. It’s an amazing monitor, and way too bright at default settings. We both would get headaches from using it until we knocked the brightness down a notch or two.
Yeah, I also discovered that with my new monitor; it actually improved a bunch when I changed the backlight color from cold to warm, but it skews color correctness.
Every time I have tried adjusting a modern monitor for comfort, I’ve set the brightness to the lowest value offered by the firmware and it’s still too bright.
It’s not that someone does or does not have “special eyes”; it’s about what you’re optimizing for, as always. If you look at the stats from a different direction, with a cool-colored backlight, light themes increase eyestrain and mess with your sleep cycle due to artificial blue-light exposure, while dark schemes reduce that. Light schemes are also less power-efficient on modern display panels, so if you’re optimizing for battery life, guess what, the dark theme wins. Then when you get to “special eyes”, there are a lot of people who have myopia or astigmatism, for whom dark themes can produce halation; chances are light themes are better there. If you’re trying to present on a large screen, a theme that’s somewhere in the middle wins: a little darker for dark rooms and a little lighter for light rooms.
My point is more or less that there is a ton of nuance here, and saying “SEE SEE SEE SEE SEE, SCIENCE PROVES MY PERSONAL PREFERENCES RIGHT” is wrong; the author honestly skips over all of that to justify their own preferences for some reason.
“I’m right because science” is a tactic to insulate against criticism, it preemptively accuses you of “denying science” if you dare to disagree.