Beautiful visualisations here. I wonder if using non-smart phones can help in any way.
According to the article, the data comes from a company collecting precise movements using software slipped onto mobile phone apps. So maybe installing/using fewer apps helps somewhat…
After reading this, I deleted half the apps from my phone and disabled location services for most of the ones that were left.
I would assume that disabling location permission for apps would also be sufficient. I can understand why maps needs my location, but not much else.
That said, the article does say The Weather Channel was also in on this, and I guess a lot of people would give it more trust as it’s a default app on iOS.
Yes and no.
The wifi MAC address method is also becoming quite prevalent. If your phone merely has wifi enabled, it can be pinged. [1]
Then over the cell network your signal can be triangulated.
True, that’s what Skyhook does as well. Interestingly (and this might show a conflict of interest within Google), Android has started using random MACs for wifi: https://source.android.com/devices/tech/connect/wifi-mac-randomization
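As far as I know, randomized MACs are easy to spot because they set the “locally administered” bit in the first octet (per IEEE 802 addressing). A rough illustrative check in Python, using example addresses I made up rather than anything from the article or the Android docs:

    def is_locally_administered(mac: str) -> bool:
        # Randomized MACs set the "locally administered" bit: the
        # second-lowest bit of the address's first octet.
        first_octet = int(mac.split(":")[0], 16)
        return bool(first_octet & 0x02)

    print(is_locally_administered("02:00:5e:10:00:01"))  # True  -> looks randomized
    print(is_locally_administered("3c:5a:b4:10:00:01"))  # False -> vendor-assigned style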
Is it? I know there’s a weather app by default, but I thought the Weather Channel app was a separate download (not a regular iOS user here). I uninstalled The Weather Channel on my Android device after they were outed as scraping contacts and selling that off. No reason to install anything of theirs after that, and now this as well.
This is beautifully visualized, topical, and well-written. I hope it can get more people to care.
Convenience seems to be rapidly eroding our civil liberties in various regards.
Maybe not only convenience. What I noticed in my home country in Eastern Europe is that people feel disempowered to fight the things the state doesn’t prohibit (or, even worse, supports) but that are obviously bad, like privacy intrusion. My take is that people perceive it as an uphill battle, mostly to be lost, as the state administration is slow, brutal on the nerves, and can drag on for eternity.
Awesome visualizations, and decent reporting.
Wanted to say that the general sense I’ve had reading these articles is that there’s an almost false dichotomy between using technology and maintaining privacy.
However, in the academic world (though slowly being productionized), there’s been a lot of work on federated learning as a way to do privacy-preserving machine learning. For example, Google uses it in Gboard to train models on the user’s device (as opposed to shipping their text data off to train on Google’s servers and storing that information there).
I wonder if that’s going to offer a viable way forward that makes all parties happy (i.e. no centralized data collection for model building; instead do all training and inference on the edge, and find mechanisms to create incentives for users and companies such that users can opt in to letting their data be used to train models, and companies can use those models to pursue whatever business objective they may have).
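For anyone unfamiliar with the idea, here’s a toy sketch of federated averaging in plain NumPy (synthetic data and a linear model of my own choosing, not anything from Google’s actual implementation): each client takes a few gradient steps on its own private data, and only the resulting weights are averaged by the server, so the raw data never leaves the device.

    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(weights, X, y, lr=0.1, epochs=5):
        # One client's local training: a few gradient steps on its private data.
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
            w -= lr * grad
        return w

    # Synthetic "private" datasets held by three separate clients (illustrative only).
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        clients.append((X, y))

    # Server loop: broadcast the global weights, let each client train locally,
    # then average the returned weights (real FedAvg weights this by data size).
    global_w = np.zeros(2)
    for _ in range(20):
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        global_w = np.mean(local_ws, axis=0)  # only model weights cross the wire

    print("learned weights:", global_w)  # should land close to [2.0, -1.0]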
Certainly worth paying attention to moving forward, although not a silver bullet by any means.
P.S. If anyone’s in this space and interested in collaborating… drop me a note :).