1. 40
    1. 14

      I co-authored this (we didn’t choose the article title, though).

      1. 7

        Yeah, they did a number on you there, because ChatGPT is always going to be too expensive. The interesting takeaway is that we are already getting to the point where we can run half-decent AI on consumer hardware.

        I guess ‘LLM scams’ is too nerdy, but how about this: “AI is coming for your bank account” ;)

        1. 11

          Yeah, there’s a tradeoff in giving a piece like this a title – does the average reader understand it vs. does it get at the core nugget / idea?

          I think one of our main takeaways is that scam followup, not initiation, is now made way easier by conversational LLMs and the fact that they can be run on consumer hardware; the fact that they confidently BS their way through anything is a feature, not a bug.

          1. 9

            I was talking to a scammer selling car parts (that they didn’t have) the other day, and it took me five or six emails to realise they were full of shit. My first thought was: man, if this were ChatGPT, I bet I would have wasted a lot more time on it.

            1. 3

              Exactly, yeah – saves them time, wastes your time, and has a higher chance of success than an unsophisticated scammer.

              1. 3

                I was thinking about this – I guess with LLM-generated text we may no longer be able to distinguish a scam email from a real one. No more PayPall or silly typos. :-)

                1. 3

                  The silly typos are on purpose, to groom their marks. The idiots don’t notice the typos, and the people who do notice are exactly the ones scammers don’t want to target. Chesterton’s Typos.

                  1. 8

                    But the typos are there only because less gullible responders waste scammers’ time. If scamming is automated, they can go after harder-to-scam victims too.

                    1. 1

                      I know you meant “don’t waste”. Yeah, the existence of LLMs means that an LLM can stalk and customize its attack to the individual prey. Romance scams will rise. They could construct bots that clone the voices of existing phone-support people and mimic their methods. The world is getting cyberpunk way too quickly.

        2. 5

          That “always” is not going to age well. The technology is already rapidly advancing towards running on cheaper hardware. It’s likely already cheaper than manual labor for triaging spam responses and followups, and it’s more scalable.

          1. 10

            That’s the nuance in the comment and the discussion about the title. ChatGPT is the model by OpenAI. It is expensive not because it requires beefy hardware to run, but because OpenAI can charge whatever they want.

            But it’s not the only LLM in town and Facebook’s leaked LLaMA can be run by anyone without paying licensing costs. That’s the cheap LLM for anyone to run, but it’s not ChatGPT.

            1. 11

              That’s the nuance in the comment and the discussion about the title. ChatGPT is the model by OpenAI. It is expensive not because it requires beefy hardware to run, but because OpenAI can charge whatever they want.

              I can’t share the exact costs, but from what I’ve heard internally, the margins on most of the OpenAI-based products in Azure are much lower than you might expect. These things really are expensive to run, especially if you factor in the amortised cost of the (insane) training runs, but even without that the compute costs for each client running inference are pretty large.

              My cynical side says that this is why Azure is so interested in them: consumer demand for general-purpose compute is not growing that fast, but if you can persuade everyone that their business absolutely depends on something that can only run on expensive accelerators, then you’ve got a great business model (and, since you’re not using them all of the time, it’s much cheaper to rent time on cloud services than to buy a top-of-the-line NVIDIA GPU for every computer in your org).

              1. 4

                Yep. My rumor mill is even more dire; scuttlebutt is that OpenAI is losing money on every query! They’re only accepting the situation because of hopes that the cost of inference (in the cloud) will continue to fall.

              2. 2

                I’m surprised they’re already running at a profit! (Unless you mean the margins are an even lower negative than the negative I previously thought they were.)

    2. 8

      My two go-to examples of AI risk from the present generation of LLM technology are romance scams (akin to this article) and radicalization.

      Teenagers get radicalized on internet forums all the time, by arguments that could very well be parroted by a suitably trained LLM. I’m not at all looking forward to learning what happens when a suitably motivated person figures out a way to automate that.

      1. 3

        What’s wrong with being radical / fundamentalist? It’s the spreading of lies and hatred that is problematic. I enjoy spending time with the works of many radical free software proponents.

        I have had meaningful conversations with tactful, radical believers of capitalism as well.

        1. 9

          Such a bot would be a form of deception and manipulation. The kinds of people who wouldn’t mind using it tend not to be the radically nice people.

          1. 4

            My AI worshipping cult is always nice. Join us! :-) Or go to the bad place when the Basilisk wakes up. :-(

          2. 3

            That’s a pretty confident statement. Why not build bots that spread factual information and promote kind behavior, and unleash them on unsuspecting online communities?

            1. 2

              There’s very little profit motive in doing so.

              1. 1

                There is not a lot of profit motive to preserving private communications, is there?

                1. 2

                  I agree, there isn’t.

      2. 2

        https://chatcgt.fr/

        Speaking of which, we have another national protest planned Thursday in France…

    3. 6

      This is obviously bad news for the elderly and for people who are not very tech-literate.

      While visiting my elderly parents recently, I noticed how good they were at spotting spam texts, but they were relying on the poor grammar and spelling to spot them.

      LLMs take that whole detection vector away, and leave elderly people a lot more vulnerable to scams.

      It’s a lot harder to teach people who are less comfortable with technology about typo-squatting, SSL certificates, and what not, and a lot of the “bad feelings” we experience when we’re contacted in ways we shouldn’t expect from e.g. banks or Amazon rely on our own tech literacy.

      1. [Comment removed by author]

    4. 4

      I am waiting for the ChatGPT robocalls that will be part of the next election cycle: both SMS and voice calls that just have a whole conversation with you about whatever stupid thing their candidate wants you to talk about.

      The days of hiring a pool of volunteers to make calls are over.

    5. 2

      This also seems like a great opportunity to create better scam detection. ChatGPT could actually detect that someone is trying to manipulate your grandpa with some well-written text that has clear red flags some people don’t notice.
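
      An automated screen along those lines could start as simple keyword-based red-flag scoring. Here's a minimal Python sketch; the phrase list, category names, and threshold are made up for illustration, and a real detector would more likely use an LLM or trained classifier:

```python
# Toy red-flag scorer for incoming messages. The phrases and the
# threshold below are illustrative assumptions, not a vetted list.
RED_FLAGS = {
    "urgency": ["act now", "immediately", "within 24 hours"],
    "payment": ["gift card", "wire transfer", "bitcoin"],
    "secrecy": ["don't tell anyone", "keep this between us"],
}

def scam_score(message: str) -> int:
    """Count how many known red-flag phrases appear in the message."""
    text = message.lower()
    return sum(
        1
        for phrases in RED_FLAGS.values()
        for phrase in phrases
        if phrase in text
    )

def is_suspicious(message: str, threshold: int = 2) -> bool:
    """Flag a message once it trips at least `threshold` red flags."""
    return scam_score(message) >= threshold
```

      A message like "Act now and buy a gift card, and don't tell anyone" trips three categories, while ordinary messages score zero. Of course, an LLM-written scam that avoids these stock phrases would sail right through, which is the hard part.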

    6. 1

      I’m starting to see this as well. On a Discourse forum I moderate, a lot of posts now end up in the queue with “As a language model, I do not have access to…”. The ones that end up in the spam queue are clearly low-effort, but I wonder whether these posts would still be detected with a little human editing, or a better prompt for the LLM.
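
      The low-effort pastes are easy to catch with plain phrase matching; here's a sketch of such a filter in Python (the phrase list is an assumption based on the boilerplate quoted above, not Discourse’s actual spam heuristics):

```python
import re

# Boilerplate phrases that commonly betray pasted LLM output.
# Illustrative list only; lightly edited posts won't contain any of these.
LLM_TELLS = [
    "as a language model",
    "as an ai language model",
    "i do not have access to",
    "my knowledge cutoff",
]
TELL_RE = re.compile("|".join(map(re.escape, LLM_TELLS)), re.IGNORECASE)

def looks_like_llm_paste(post: str) -> bool:
    """Return True if the post contains a known LLM boilerplate phrase."""
    return TELL_RE.search(post) is not None
```

      As the comment notes, this only catches the laziest cases: one pass of human editing, or a prompt telling the model to skip such disclaimers, defeats phrase matching entirely.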

    7. 1

      I don’t see how this will change anything… There have always been good and bad scammers. In the end, I believe that if you understand how your bank and other services communicate with you, and how you can verify that communication, there is nothing to brace for. If anything, this may just make it more important to flag email domains as “verified” or “unverified”. I would be more worried about receiving physical mail that looks a lot like official letters from your bank.

      1. 7

        I don’t see how this will change anything

        Most scams targeted against my nationality are very easily spotted, simply because their German is that bad. Those LLMs are literally made to spew out text. And based on my little experience with them, they would make a really good scam writer in native German. They can easily hold a conversation without getting weird, or running out of interest because it takes too long to get you. If you set those up correctly, you can operate a giant pipeline of scammy bots. Who cares if it takes half a year to befriend you; just run enough of them in parallel and you will win eventually. Also, the CodeGPTs make it even easier to spew out fake websites (relevant for good scam mails) that operate well enough to fool you. It’s not about being perfect; it is about making it even easier to find the few required needles in the haystack that actually give you money.

        You know the most successful scam here currently? An SMS with “Hello, I’m your daughter, I lost my phone, can you text me on WhatsApp at this number?” What if you train those bots to be good enough to look like a typical teenager in trouble? Heck, you can even let this self-learn current teenager slang without much additional overhead.

        I’m not trying to create fear, it is just very obvious how much easier and more successful it becomes to operate those scams now. I’m waiting for the day a company data leak turns into a giant money loss, because they later got scammed based on the leaked email exchange. Sure, that’s possible already, but why try to pick which of the employees you can use, and what kind of exchange and subject is normal, when you can totally automate it? If “hey boss, please send money to XX for that invoice” already works well enough, imagine what this could do.

        1. 6

          Most scams targeted against my nationality are very easily spotted, simply because their German is that bad.

          The point often raised in discussions of this is that scammers already have the ability to send at least the first message or two written in a clear, fluent form of the target’s preferred language. But they choose not to, because sending a message in bad German (to you) or bad English (to me) acts as a filter. You spot the problem and immediately ignore it, so only people who are fooled enough/greedy enough/unaware enough to miss or overlook the problem will respond, which means that the later stages of the scam have a higher proportion of people who will miss or overlook all the other warning signs and go all the way through to giving up money.

          1. 6

            The article specifically addresses this, and one of its core predictions is that LLMs will reduce this reliance on early filtering: they will make stringing marks along much cheaper, so there’s no need to concentrate only on those that have a low drop-out rate later in the process.

          2. 1

            That’s a good counterpoint. I still think that with better technology, you don’t need to filter out so much. So better language becomes interesting.

            Edit: And as I’ve just seen in simonw’s post: yeah, romance scams and well-worded disinformation (spambots) that spread fear and made-up stuff are a thing.

          3. 1

            Thanks for this explanation, I understand now. Was wondering why the typos were intentional.

        2. 1

          Most scams targeted against my nationality are very easily spotted, simply because their German is that bad.

          Is the assumption here that good German = a good scam?

          Those LLMs are literally made to spew out text. And based on my little experience with them, they would make a really good scam writer in native German.

          Uh, based on what data? You’ve had experience with AI writing good scams in the past? We’re all speculating about the effectiveness of this.

          They can easily hold a conversation without getting weird, or running out of interest because it takes too long to get you. If you set those up correctly,

          “If you set those up correctly” is a huuuuge undertaking in itself. It’s like saying “if you scam correctly”!

          you can operate a giant pipeline of scammy bots.

          I think there’s a lot of other skills required to set up this pipeline. Like technical skills to create the pipeline in the first place.

          Who cares if it takes half a year to befriend you; just run enough of them in parallel and you will win eventually.

          Sure, given everything is set up perfectly and well, the scam does scam… Again, all speculation.

          Also, the CodeGPTs make it even easier to spew out fake websites (relevant for good scam mails) that operate well enough to fool you. It’s not about being perfect; it is about making it even easier to find the few required needles in the haystack that actually give you money.

          Uh, phishing websites have been 1:1 copies for a looooooong time. This is not anything new. There have even been programs that can do it without any effort for a while now.

          You know the most successful scam here currently? An SMS with “Hello, I’m your daughter, I lost my phone, can you text me on WhatsApp at this number?”

          I think there’s more to the scam here than just that.

          What if you train those bots to be good enough to look like a typical teenager in trouble? Heck, you can even let this self-learn current teenager slang without much additional overhead.

          Show me a scam that involves typical teenager problems from the start to the end and then I can evaluate and answer this better.

          I’m not trying to create fear, it is just very obvious how much easier and more successful it becomes to operate those scams now. I’m waiting for the day a company data leak turns into a giant money loss, because they later got scammed based on the leaked email exchange. Sure, that’s possible already, but why try to pick which of the employees you can use, and what kind of exchange and subject is normal, when you can totally automate it? If “hey boss, please send money to XX for that invoice” already works well enough, imagine what this could do.

          It’s obvious that writing proper English or German or any other language is easier with LLMs, yes. But there have been scammers who are native English or German speakers for a long time now.

      2. 1

        Okay, random made-up example: without ChatGPT, 10% of the population fell for a scam. With ChatGPT, it is now 35%. The more “believable” the scam is, the higher the chance of falling for it. Of course, there will be some inflection point where people become smarter, but in the near future it seems scammers are going to make some real money!