Speaking of Firefox, I need an old version of Firefox (or any other browser) that will connect to older devices on my own networks (NAS, printer, WAP, etc.) that don’t have current certificates. Does anyone have a recommendation for where to safely download one?
Try here. There are more options here.
Thanks! I needed this so badly.
How old are we talking? If it’s old enough that it may have ended up on shovelware/magazine CDs, archive.org might carry some of them.
Senator Ron Wyden (D-OR) is a national treasure. He is consistently on the right side of tech issues.
Not off-topic. Senator Wyden is mentioned in the article and is the reason why we know about any of this.
Many years ago we installed new PLC-controlled fuel oil day tanks and pump sets for emergency generators. They stopped working after one month. Turns out after one month you needed to enter a six-digit pass code that the manufacturer only provided once they got paid. We had paid the distributor in full but they were withholding funds from the manufacturer due to back charges from a number of other customers. That’s when we learned how to download a PLC program, investigate the ladder logic, bypass the check, and reupload the program. It was pretty easy once you bought the $1000 cable.
I guess the PLC program in this case wasn’t encrypted or obfuscated, because we didn’t need an oscilloscope.
My best guess is the spinning circle hypnotized the Grammarly AI.
I made a mistake this summer that could have cost me $500,000, but it was okay: I “only” broke a $2,500 part.
Even Obama knew: https://www.righto.com/2012/11/obama-on-sorting-1m-integers-bubble.html
MIT Scheme. I’ve been using it since 1984, and it keeps getting better.
I’m curious: what kind of things do you use it for?
My personal web site, including my blog; my address book; my calendar; my podcast player; my bookmarks manager; etc. All of my apps are based on an in-memory graph database I wrote on top of MIT Scheme. It’s a great environment for exploring and learning ideas incrementally.
Very cool! It’s amazing that you’re so productive that you can build small-scale applications for your own use. I’ve always struggled to get anything larger than a few lines of code working for myself, especially nowadays.
Yes! And thanks to gjs and cph for it!
Pushing conditions into data is great for readability, but the conditionals don’t disappear; they are just hidden in the dict or find library functions. For the grades, however, the solution depends on the order of rows in your array being monotonic in the key, so it introduces a new hazard for maintenance. So either sort the array first or stick with a dict, because you have to either compare the grade against all the keys or all the keys against each other. Or put a comment in the array data and hope the maintainer reads it.
The author acknowledges this.
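For concreteness, a minimal Python sketch of both patterns under discussion (all names and boundaries are hypothetical): the dict handles exact matches cleanly, while the grades table only works while its rows stay sorted, which is exactly the maintenance hazard being described.

```python
# Exact-match dispatch: an if/elif chain becomes a dict lookup.
HANDLERS = {"start": "starting", "stop": "stopping", "status": "ok"}

def respond(command: str) -> str:
    return HANDLERS.get(command, "unknown command")

# Range dispatch (grades): a table works, but only while the rows stay
# sorted by descending threshold -- reorder them and results silently change.
GRADE_BOUNDARIES = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]

def grade(score: int) -> str:
    # The first row whose threshold the score meets wins; order is load-bearing.
    return next((letter for threshold, letter in GRADE_BOUNDARIES
                 if score >= threshold), "F")
```

Sorting the table at startup, as suggested above, removes the hazard at trivial cost.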
they are just hidden in the dict or find library functions.
I feel like this is missing the point that these library functions are a lot more reliable than hand-writing it each time.
For the grades, however, the solution depends on the order of rows in your array being monotonic in the key, so it introduces a new hazard for maintenance.
The order dependence was there before the array too.
What I see as the main benefit is that it leaves less room for edge cases. Each condition has the same workflow, so why write each branch out and leave room for someone to stick in a special case specific to one branch? When edge cases arise, it’s worth reconsidering the big-picture transformation; laying it out in data forces you to consider that.
There is also a performance tradeoff here that will vary hugely between languages. In something like Python, dictionary lookup is aggressively optimised because everything else in the language depends on it. Turning a bunch of conditionals in CPython into a single dictionary lookup will almost certainly be faster. In contrast, a C++ compiler is good at optimising nested branches into tree searches or jump tables and so may perform better than a data structure, unless that data structure is fully inlined and converts to the same thing. In JavaScript, I have no idea which will be better for this week’s version of v8, and I strongly suspect it alternates over time.
For very large lookups, most compilers will not do better than a radix-tree search on prefix matching, whereas most map data structures will be some kind of hash table and so give constant-time average-case lookups. If you’re lucky, your compiler will generate a perfect hash function and use that.
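If anyone wants to check the CPython half of that claim, here is a rough micro-benchmark; timings vary by interpreter version and machine, so treat it as a sketch rather than a verdict.

```python
import timeit

def grade_if(score):
    # Chain of conditionals, evaluated top to bottom.
    if score >= 90: return "A"
    if score >= 80: return "B"
    if score >= 70: return "C"
    if score >= 60: return "D"
    return "F"

# Bucketing by tens turns the range test into an exact-match dict lookup.
GRADES = {10: "A", 9: "A", 8: "B", 7: "C", 6: "D"}

def grade_dict(score):
    return GRADES.get(score // 10, "F")

print("if chain:", timeit.timeit("grade_if(61)", globals=globals()))
print("dict:    ", timeit.timeit("grade_dict(61)", globals=globals()))
```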
Good read, thanks. Went from that to read this:
https://mike-sheward.medium.com/passwords-in-logs-why-what-and-how-8ab7a6d18db6
wherein he says: a great way to tell if a site is engaged in poor security practices is to use the ‘forgotten password’ function. If you get an email containing your old, forgotten password, something is up. They shouldn’t know it to be able to send it.
This reminds me of a similar pause I get when I change a password and the site says my new password is too similar to my old password, or one of my n most recent passwords. I’m pretty sure they can’t get that from comparing salted hashes, or even unsalted hashes. So they must be storing my passwords in plain text, right?
I’d appreciate it if someone could confirm or disprove this.
The ‘n most recent’ check should be doable by storing the n most recent hashes. The ‘too similar’ thing is more suspect.
I haven’t tried to do this, but if there are text-focused perceptual hash functions, I guess they could do this more or less the way image similarity is done.
I can imagine some other ways to approach it, like chunking the password or computing hashes for common iteration schemes at password-change time.
So I wouldn’t say must, but it’s certainly possible. Regardless, I’m skeptical that there’s any way to implement this that doesn’t round up to extra leverage for an attacker with a database dump.
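For what it’s worth, the ‘hashes for common iteration schemes’ idea can be sketched without any plaintext storage. Everything below (the mutation list, the PBKDF2 parameters, the record layout) is illustrative guesswork, not how any particular site does it: at change time the site has the new password in plaintext, so it can hash a few mutations of it against the stored hashes of the n most recent passwords.

```python
import hashlib

def hash_pw(password: str, salt: bytes) -> bytes:
    # Stand-in for whatever slow, salted hash the site already uses.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def mutations(password: str):
    # A few common iteration schemes; a real list would be longer.
    yield password
    yield password.lower()
    yield password.rstrip("0123456789")          # drop a trailing counter
    for d in "0123456789":
        yield password.rstrip("0123456789") + d  # bump the counter

def too_similar(new_password: str, old_records) -> bool:
    # old_records: (salt, hash) pairs for the n most recent passwords.
    # The new password arrives in plaintext at change time, so only the
    # old salted hashes ever need to be stored.
    return any(hash_pw(m, salt) == old_hash
               for salt, old_hash in old_records
               for m in mutations(new_password))
```

Which also supports the leverage worry: any mutation the server can test here, an attacker holding the same dump can test too.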
Emacs
curl
Wikimedia
Cockatrice
I agree that the current state-of-the-art AI text-generation systems do not have the ability to generate text that people can have confidence is stating true facts about the world; I just don’t see this as a problem. It’s always been possible for human writers to write fiction, to write lies masquerading as the truth in order to mislead people, and to write complete bullshit in this article’s sense of disregard for truth value; and LLMs can produce text that fulfills all these functions, depending on what the human user asks of it. People certainly shouldn’t automatically believe that the text a generative model produces has any connection to the truth of the world; but this is true for every genre of text that has ever existed.
But Frankfurt’s book is not the only classic text on bullshit. David Graeber’s famous analysis of Bullshit Jobs explains precisely why Elon Musk’s claim to the British PM is so revealing of the true nature of AI. Graeber revealed that over 30% of British workers believe their own job contributes nothing of any value to society. These are people who spend their lives writing pointless reports, relaying messages from one person to another, or listening to complaints they can do nothing about. Every part of their job could easily be done by ChatGPT.
I’ve never been convinced by Graeber’s writing on bullshit jobs. Even if an employee dislikes their job, which all sorts of people do for all sorts of reasons, someone finds it worthwhile to pay them to do it. And when the institution that was previously employing a person realizes that they were wrong and actually it really isn’t worthwhile to pay someone to do that job, we typically call this a “layoff”, and most people subject to them aren’t pleased about it. If the reason they’re being laid off is because ChatGPT actually can do their job as well as a human could (or at least for a more favorable price/quality ratio), we typically call this “technological unemployment”, and this is also a thing the people subject to it are generally not very happy about.
I’ve actually personally been in a position where I disliked a lot of aspects of my job, was considering quitting and doing something else, and then found myself laid off and being kinda happy about it. And even in that case, I would not say that my job was bullshit in any meaningful sense - if nothing else, the paycheck money hitting my bank account every two weeks was very much not bullshit to me.
People certainly shouldn’t automatically believe that the text a generative model produces has any connection to the truth of the world; but this is true for every genre of text that has ever existed.
But we needn’t automate the bullshit any more than we should create machines that overload our septic systems.
It’s always been possible for human writers to write fiction, to write lies masquerading as the truth in order to mislead people, and to write complete bullshit in this article’s sense of disregard for truth value
Have you ever tried to write convincing nonsense that leads people to a particular belief? It’s not easy. If it were, hiring an advertising company would be a lot cheaper. The problem is that it is increasingly as simple as providing a prompt to a tool stating what you want.
Even if an employee dislikes their job, which all sorts of people do for all sorts of reasons, someone finds it worthwhile to pay them to do it.
There is a theory of management which suggests that the main reason managers hire employees is to increase headcount. In Bullshit Jobs, this is referred to as “managerial feudalism.” According to this theory, hiring is not done because it is “worthwhile”; it is not done to save the business money by acquiring specialized labor, nor to free up managers from their current tasks so that the business can grow. Rather, hiring is done in order to flatter the egos of individual hiring managers and directors.
And even in that case, I would not say that my job was bullshit in any meaningful sense - if nothing else, the paycheck money hitting my bank account every two weeks was very much not bullshit to me.
That’s still a bullshit job, though. If one doesn’t get paid, then one wasn’t employed; a job implies compensation.
It’s worth noting that a lot of this kind of behaviour depends on timescales. A company that hires people to work bullshit jobs will have higher costs than one that doesn’t and so the stable state is for the company employing bullshit workers to go out of business. It can take a very long time to reach that stable state. It’s taken two hundred years since the Industrial Revolution for the idea that managers should understand intrinsic and extrinsic motivation and work to get the best out of their employees to become mainstream. Even then, it isn’t universal.
On empire building, there are also factors that work to propagate this. Someone who builds a big empire gets promoted and then moves to another company. The new company has no visibility into how many of the manager’s reports were working bullshit jobs and so that culture moves over to the new company. That manager then (subtly) encourages his direct reports to grow teams as a goal in and of itself. Folks I know at Google are complaining that Google has been hiring a load of ex-Microsoft people with exactly this mindset, but when you get enough of them you end up promoting people based on these criteria.
Big companies take a long time to fail. IBM made terrible decisions for 20 years and is still around, though a fraction of its former power. This kind of change happens over a period of decades, not years.
I’ve been wondering if part of the problem is that the world “feels” (subjectively) overwhelmingly complex to many people. If thinking about what’s true causes anxiety and stress, is it any wonder that people are happy to accept things that merely seem or feel right (again, subjectively)? Part of this might be that the world got really complicated, but I wonder if people also got less able to handle the complexity for some reason.
What if the problem is that the world actually is overwhelmingly complex, and the many people who feel that way about it are correct to feel that way? What if optimizing for truth above all other concerns has only ever been what a small minority of unusual people has done?
An LLM can’t begin to optimize for truth because it has no model of the behavior of conscious systems and therefore can’t evaluate the speaker’s statements within the context of their motivation, let alone broader contexts.
That’s an interesting take on it. My gut reaction was to say “well no one, until recently, believed something as stupid as the government is putting microchips in the Covid vaccines”, but then I reflected a bit and realized that people have always believed some pretty dumb (based on my perspective) things. I do think, though, that even if you’re right, technology and population have magnified the collective choices people make, so now we’re stuck in a position where we need more people to be concerned with truth.
the best memes
I added a webring function to one of my projects, but for now I own the only instance of this project, and I’m all alone on the webring :(
Webouroborus
Pretty interesting even though I haven’t the foggiest what Nix is.
If you ever encounter a person taking the ‘the government/“police” should be able to read any communications to stop [crime/pedos/whatever]’ position, ask them if they would be ok with a government regulation mandating cameras in every room of every building (your home, your workplace, your religious building of choice, bathrooms, etc).
It’s an obviously abhorrent concept, but unlike breaking the security of pretty much everything you do regularly and rolling back decades of security improvements that enable, say, online banking, the “cameras inside your home” law has the benefit of actually helping solve and prevent crimes, because you can’t trivially circumvent it.
It’s well-known that spies and terrorists often communicate face to face. Therefore we should monitor every conversation! /s
That’s my point though: they can have a face to face conversation, but there’s conveniently a camera there. There’s no potential for abuse in my perfect system :D
Spies and organized crime run in the same circles, and often make use of each other, voluntarily and otherwise. In some countries they are the same people. Giving spies the master keys will inevitably result in those keys being in the possession of the criminals. It’s the average person who will pay.
Don’t believe me? Look at civil forfeiture. It was enacted as a way to keep the biggest drug dealers from using their ill-gotten wealth to escape justice. In practice, it’s just a bunch of local corrupt cops taking money from small-time criminals and perfectly innocent people who just happen to not have the resources to buy their own justice back.
In a previous lobste.rs post, I asserted that you could build a system for doing end-to-end encryption over an existing messaging service in about a hundred lines of code. It turns out that (with clang-format wrapping lines, and not counting libsodium) it was a bit closer to 200. This includes storing public keys for other users in SQLite, deriving your public/private key pair from a key phrase via secure password hashing, and transforming the messages into a format that can be pasted into arbitrary messaging things (which may mangle binaries or long hex strings).
Please share this with any lawmakers who think that making Signal, WhatsApp, and so on install backdoors will make anything safer.
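To make the scale of the claim concrete, the ingredients described map onto something like this rough Python sketch using the PyNaCl libsodium binding; the schema, the fixed salt, and all the names here are my illustrative assumptions, not the original code.

```python
import base64, sqlite3
from nacl import pwhash
from nacl.public import PrivateKey, PublicKey, SealedBox

# Derive a keypair deterministically from a key phrase via Argon2id.
# A fixed salt is for illustration only; a real tool needs a better scheme.
SALT = b"\x00" * pwhash.argon2id.SALTBYTES

def keypair_from_phrase(phrase: str) -> PrivateKey:
    seed = pwhash.argon2id.kdf(
        32, phrase.encode(), SALT,
        opslimit=pwhash.argon2id.OPSLIMIT_MODERATE,
        memlimit=pwhash.argon2id.MEMLIMIT_MODERATE)
    return PrivateKey(seed)

# Store peers' public keys in SQLite.
db = sqlite3.connect("keys.db")
db.execute("CREATE TABLE IF NOT EXISTS peers (name TEXT PRIMARY KEY, key BLOB)")

def add_peer(name: str, key: bytes) -> None:
    db.execute("INSERT OR REPLACE INTO peers VALUES (?, ?)", (name, key))
    db.commit()

def encrypt_to(name: str, message: str) -> str:
    (key,) = db.execute("SELECT key FROM peers WHERE name = ?", (name,)).fetchone()
    # Base64 survives messaging apps that mangle raw binary.
    return base64.b64encode(SealedBox(PublicKey(key)).encrypt(message.encode())).decode()

def decrypt(me: PrivateKey, pasted: str) -> str:
    return SealedBox(me).decrypt(base64.b64decode(pasted)).decode()
```

The exact line count hardly matters; the point is that the primitives are freely available, so backdooring the big apps doesn’t remove the capability.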
“Lawmakers” don’t think that banning E2EE will actually prevent the use of E2EE, and engaging with them as if they do is a big part of why we lose these fights.
Relevant XKCD: https://xkcd.com/651/
Interesting. What exactly motivates them then?
Being perceived as doing something about a thing that voters perceive as a problem.
Also, the point is rarely to actually make the thing impossible to do; the point is to make it impossible to do the thing while also staying on the right side of the law/retaining access to the normal banking and financial system/etc.
This, for example, was the outcome of SESTA/FOSTA – sex trafficking didn’t stop, and sex work didn’t stop, but legitimate sex workers found it much harder to stay legit.
I wish I could pin this reply to the top of the thread.
Our lawmakers are, by and large, quite competent. They are competent at politics, because our system selects people who are good at politics for political positions.
Part of that is, as you say, being seen to do something. Another part is reinforcing the systems of power they used to rise to the top; any law that makes using encryption criminal criminalizes the kind of person who already uses encryption. It’s worth thinking about why that’s valuable to politicians and their most monied constituents.
Don’t forget the key word “perceived” and the two places it appeared in that sentence.
There’s often a lot of separation between “the problems that actually exist in the world” and “the problems voters believe exist in the world”, and between “the policies that would solve a problem” and “the policies voters believe would solve that problem”. It is not unusual or even particularly remarkable for voters to believe a completely nonexistent problem is of great concern, or that a politician is pursuing policies the exact opposite of what the politician is actually doing, or that a thing that is the exact opposite of what would solve the problem (if it existed) is the correct solution.
(I emphasize “voters” here because, in democratic states, non-voters wield far less influence)
So in order to usefully engage with a politician on a policy, you need to first understand which problem-voters-believe-in the politician is attempting to address, and which perceived-solution to that problem the politician believes they will be perceived-by-voters to be enacting.
Which is why cheeky tech demos don’t really meaningfully do anything to a politician who is proposing backdooring or banning encryption, because the cheeky tech demo fundamentally does not understand what the politician is trying to accomplish or why.
Assuming politicians are competent, what they think people believe should be pretty accurate, and talking to them directly will not change that. What will is changing what voters actually believe, and the cheeky tech demo could help there.
We do need a platform to talk to voters, though. So what we need is to reach out to the biggest platforms we can (this may include politicians who already agree) and show them the cheeky tech demo.
Start with Oregon senator Ron Wyden.
Power.
thank you so much for this!
Willing to do all the work, but for all the emails, and I don’t blame them.
If a domain wants to block adblockers, that domain should have legal liability for any malware that comes from the ads.