1. 6
  1.  

    1. 11

      Posted this due to this gem of a statement and recent discussions both of how to combat AI scrapers and the in-band signaling attacks that prompt injection opens up:

      To catch attempts at subverting Operator with jailbreaks and prompt injections, which might hypothetically be embedded in websites that the AI model browses, OpenAI says it has implemented real-time moderation and detection systems. OpenAI reports the system recognized all but one case of prompt injection attempts during an early internal red-teaming session.

      Just waiting for big websites to start happy-pathing the AI’s interaction with it, not the human’s.

      1. 3

        that’s called an api

        1. 3

          The system performs tasks by viewing and interacting with on-screen elements like buttons and text fields similar to how a human would.

          It isn’t looking for APIs, it is trying to navigate like a normal user. So I imagine people will start to figure out how to, for example, get the bot to navigate through a referral link. Interestingly, this appears to be all visual-based, not scraping the content, so just hiding the prompt injection in HTML comments won’t suffice.

        2. 1

          Also, what do they consider prompt injection here? How much information disclosure about the previous step on another website they consider OK?

        3. 3

          Browser-in-browser? Oh sure let’s give OpeAI full access to personal data, cookies passwords accessibility etc.

          1. 3

            Thinking more about this overnight, this feels like the natural way for OpenAI to get beyond robots.txt denials and paywalls: become the user agent.

            1. 2

              I’m the operator with my pocket calculator.

              1. 2

                let’s not post press releases please. the thing doesn’t work

                https://bsky.app/profile/edzitron.com/post/3lghcgkdhws2z

              🇬🇧 The UK geoblock is lifted, hopefully permanently.