Communities that used to have public presences, online forums, wikis, all of that reduced to private, secluded Discord servers.
This is partially due to how easy it is to build a little bubble (either on Discord or a self-hosted platform) and, I would argue for many of us, due to the sheer liability that public discourse currently carries. Pick one of your heroes, and then look at the treatment they get in the news and on online forums like this one. Given the choice of contributing in public only to be bludgeoned with incorrect readings of your posts, or staying in a splinternet with people who have proven at least some level of good-faith rhetoric and intelligence, the decision is obvious to everyone who isn’t a masochist (hi).
The inquisitors have scared off the witches and the intellectuals. This should not be surprising, and it doesn’t matter besides: people don’t actually want dissenting opinions, they want the warm fuzzy feeling of fitting in with their tribe. I’ll stop pulling on that thread, because it’s a little far out for where we are here on Lobsters at the moment.
Create the memes that you need in order to discredit your opposition. They’ll spread like wildfire.
This has been going on at least as far back as Gamergate, as 9/11, as yellow journalism in newspapers at the turn of the 20th century and before. Even Jesus got bad press. People have been slandering and libeling each other since the day that one human talked shit about the other to a third.
I hate to surprise you kind and gentle techies, but lying is not a new thing.
Our world is being engulfed in relative truths based on assumptions. The common belief in an “objective reality” is crumbling.
This has basically always been the case, and even if you limit your lookback window to thirty years, this crumbling of “objective reality” is not new.
~
Nice writing, but I don’t think there’s some new truth here so much as perhaps a distressing realization of something that has been happening for quite some time.
Long have humans lived with each other’s lies and slander. Long have we lived amid mass propaganda. Yet long have most people still trusted most of the world around them, whether by chance or by choice.
I agree with the spirit of grumpy caveat-ing on this topic, but I find the ever-louder chorus of mistrust interesting in and of itself. Long have most people trusted most of the world around them, and when the ratio of trusting folk greatly changes, it does herald a different era. Not a different era as defined by the changing technologies, or by the actors using them, but a different era as carved by the change in the majority of people, and how little they feel they can trust.
If some of us were ahead of the curve, we don’t get a medal for it, and the world still changes as our numbers swell and become dominant.
About AI and fake news, my astrological prediction is that we will see a burst of these, and then they will die down. Because let’s face it: we were always able to fake a lot of things that are taken as fact in our daily lives, like IDs, signatures, or voices, without any AI. It just seemed much more unlikely, or people didn’t care as much. And faking news isn’t new either*. It was just less likely to be made by your friendly basement neighbor. If anything, I hope it will create a little more awareness that you shouldn’t start a hate mob just because someone somewhere said that person X did Y.
(I always forget that I can’t post more than once, so I end up editing my posts to add more content about different topics.)
*In Germany, a news outlet called “Bild” (“Picture” in English) is well known for twisting headlines and framing stories so drastically that it is little more than a propaganda machine. In the same way, Facebook was accused of showing people more and more inflammatory posts, because that is what creates views and clicks. There are also well-documented hoaxes, like the iPhone rumors about “features” such as “wireless charging via microwaving”, which people believed. (For more political examples, look up Wikipedia’s lists of (known) false claims made by different states during various wars.)
“No Way To Prevent This,” Says Only Nation Where This Regularly Happens
I wonder about it in terms of an SNR problem. I agree with everything you’re saying, but wonder about the particular mechanics of these things “dying down”. Generally, they do so because the cost to produce significant, impactful noise grows high enough to make it no longer worth it.
I think we’ve already seen that this cost-benefit argument has been moving in favor of disinformation since the start of the internet age, and I suspect AI will “improve” it again.
So then we are looking for tooling to more quickly discover the signal amidst growing noise that is both cheap to produce and difficult to filter. So we turn to AI to do that as well.
People have already discussed the realistic scenario where you use AI to generate a sufficiently “business polite” email which is then consolidated into practical bullet points via AI on the reader’s end.
I don’t really know what to imagine about the world if we enter into that scenario, but it’s still pretty concerning. Especially if there are winner-takes-all dynamics in AI and a single provider controls both the encoder and the decoder.
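For concreteness, here is a minimal sketch of that encode/decode round trip in Python. The `complete(prompt)` helper is a hypothetical stand-in for whichever LLM provider you use; none of the names below come from a real API.

```python
# A hypothetical round trip: the sender inflates bullet points into a
# "business polite" email, and the reader deflates it back into bullets.
# complete(prompt) is a stand-in for any LLM call, not a real API.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM provider of choice here")

def encode_email(bullets: list[str]) -> str:
    """Sender side: expand terse bullet points into a polite email."""
    prompt = (
        "Write a courteous, professional email that conveys exactly "
        "these points and nothing more:\n- " + "\n- ".join(bullets)
    )
    return complete(prompt)

def decode_email(email: str) -> str:
    """Reader side: compress the polite email back into actionable bullets."""
    prompt = (
        "Summarize the following email as terse, actionable bullet points, "
        "dropping all pleasantries:\n\n" + email
    )
    return complete(prompt)

# The worry above: if a single provider supplies both encode_email and
# decode_email, it sits in the middle of every such exchange.
```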
I can see the problem, but at the same time we have had this problem of totally misinformed people for ages. What I mean by that is that there are, and always will be, groups of people growing up with a completely different view of the world, up to the point that you may discover they believe in very weird stuff.
Random examples include various religions, sports, technology, customs, privacy, weight loss methods, fear of electromagnetic waves and homeopathic medicine.
And “homeopathic medicine” is already a very strong contender: virtually everyone will have learned household remedies which they use for illnesses such as the flu. Now the question is which of them actually work and which are just plain useless. You can apply the same to various cooking habits. It all boils down to culture: your parents teach you various things, and those may set you up with beliefs and a perception of the environment that many others probably won’t share.
Edit: Other big examples are the global warming and COVID fake facts. Both have papers and studies behind them, and still a ton of people believe otherwise. I personally know people who simply believe their “physics buddy” who tells them why global warming is a hoax or why double-glazed windows are just an invention of the “construction mafia”. I’m talking about fully degreed people in tech.
To make it short: without modern communication, every village has its own facts. And if you add newspapers and something like telegraphs, then you still have one “source of truth”, one oracle for the artificial intelligence in everyone to decide with.
I’m missing the piece of the puzzle that explains why we aren’t drowning in false news already. Most of the stories we read are from giant news agencies, simply pasted into every newsletter. I doubt anyone really fact-checks this, or that anyone really could. We clearly don’t need AI to get overrun by false news, and people don’t seem to need AI fakes to believe random people’s stories over anything else.
I’m missing the piece of the puzzle that explains why we aren’t drowning in false news already.
A lot of the stories from giant news agencies are false — consider the consensus in the high-reputation newspapers and cable news channels in 2003 that Iraq possessed weapons of mass destruction and was imminently likely to use them on the US. And while this false news can cause tremendous harm (one million dead Iraqis), it doesn’t cause internal social discord, because the messaging is all consistent; every village gets the same set of (false) facts. Today, there are more news sources, all of them with their own distinct reality tunnels. But the giant news agencies are still mostly on the same page because the social/economic structures underpinning them haven’t really changed.
And that’s one of the points I wanted to make: the difference between true and false facts is, to a good extent, just a matter of global consensus. Unfortunately, the fear of misinformation does not answer where the “true facts” come from. To quote a German politician:
Part of these answers would unsettle the population
Your example is a great window into human psychology. We may not want the truth, because we couldn’t live with it.
I want to focus on the MGS analogy. The Patriots claim that they emerged from some sort of computational nexus inside the White House, but this is yet another lie. The Patriots started as a group of humans with a particular goal of controlling discourse and politics; the AIs that we encounter were built by those humans in order to automate and extend control.
In Illuminatus!, the Illuminati and other culture-jamming groups are trained to read the newspaper. This takes a lot of effort, because the newspaper is written to control its readers, and uses the scareword “fnord” to alter people’s memories and emotions as they read. In MGS, the Patriots use memes and nanites to prevent people from discussing the Patriots or otherwise rebelling. These are actually more connected than one might think: “fnord” is a placeholder, and the actual word used to scare people in Discordian traditions is “Communism”; while in MGR, we learn that the Patriots ally with capitalists and fascists in order to increase their control over the USA.
This game is also notable because it came out in that small sliver of time when modern gaming was already a thing but the common definition of memes as we know them today didn’t exist yet.
If there were such a time, it was somewhere in the 90s. Communicable memes were hypothesized in the 90s; the word “meme” was borrowed from sociology and used for image macros later.
I see what you did there
We may not want the truth, because we couldn’t live with it.
Then we should die, no? Otherwise, what’s the point?
Make your own decision about how much suffering in the world you are ready to confront, and where you buy cheap products made by children’s hands.
Previously, on Lobsters.
This is one of the coolest things I’ve read in a long time. Big fan of your writing.