I don’t mind some of these things on AI, but why are they always so darn long? I think it’s like 10,000 words or something. I kind of scrolled to the end and oh well. Deep fakes are most likely to kill us. I think the summarization might have really helped this one.
I’ll be honest, I dumped this one straight into my Claude summary project even before I saw your comment.
I won’t post the summary, but the custom project instructions I use at the moment are:
You summarize the pasted in text
Start with an overall summary in a single paragraph
Then show a bullet pointed list of the most interesting illustrative quotes from the piece
Then a bullet point list of the most unusual ideas
Finally provide a longer summary that covers points not included already
I don’t think there’s nearly enough discussion out there about how to craft a good summary prompt. Mine here works well enough but I bet there are dozens of ways to improve it.
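For anyone who wants the same setup outside the Claude Projects UI, a minimal sketch is to pass those instructions as a system prompt over the API. This assumes the anthropic Python SDK; the model id and wrapper function are placeholders, not anything from the comment above.

```python
# Minimal sketch: run the summary instructions above as a system prompt
# via the anthropic Python SDK. Model id is an assumption.
import anthropic

SUMMARY_INSTRUCTIONS = """\
You summarize the pasted in text
Start with an overall summary in a single paragraph
Then show a bullet pointed list of the most interesting illustrative quotes from the piece
Then a bullet point list of the most unusual ideas
Finally provide a longer summary that covers points not included already"""

def summarize(text: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model id
        max_tokens=2000,
        system=SUMMARY_INSTRUCTIONS,
        messages=[{"role": "user", "content": text}],
    )
    return response.content[0].text
```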
Does the model really “use” the highly subjective qualifiers “most interesting” and “most unusual”? What is your reasoning for including those? Have you tested results of the prompt with and without and compared amounts of “interestingness” and “unusualness” being highlighted?
Not Simon, but anecdotally, yes. Seeding the LLM via explicit instructions or implicitly through certain prompts can radically alter the response. For example, I’ve taken to prefacing questions with “if you aren’t confident in an answer, I’d prefer that you do not give one” and (again, anecdotally) it makes the LLM less likely to hallucinate false-positive responses. Using certain phrases (the royal ‘one’, ‘whom’) and grammatical constructs that are more unusual outside of technical discussions also tends to elicit more specific answers (you could write a paper on such biases, but it’s not something I care to dive into here).
The thing I’ve noticed most is that the LLM is not a human and does not inhabit a fixed persona. It treats the quality of the response as similar in kind to the form of the response: questions that are structured in a more specific manner are more likely to elicit a similarly specific response. Ask it to provide more accurate information, and it will, because it does not understand accuracy as a semantic property, only as an abstract syntactic one.
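A rough sketch of that kind of seeding: the confidence hedge is just a fixed preface, so hedged and unhedged runs of the same question are easy to compare. The helper below is illustrative, not something the commenter shared.

```python
# Illustration only: prefix a question with the confidence hedge described
# above so hedged and unhedged runs can be compared on the same questions.
HEDGE = "If you aren't confident in an answer, I'd prefer that you do not give one."

def with_hedge(question: str) -> str:
    return f"{HEDGE}\n\n{question}"

# e.g. send both question and with_hedge(question) through the same model
# and count how often each produces a confident but wrong answer.
```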
Sure, my question was more about the specific terms Simon’s prompt used: why the subjective terms “interesting” and “unusual”?
The “most unusual” thing definitely works. It’s my favorite part of the prompt, because it highlights the things that don’t fit with whatever the model’s idea of “widely assumed already” might be.
I haven’t done a formal evaluation of it, but I’ve run hundreds of ad-hoc documents through that prompt now and I often find the “unusual ideas” section is the thing that provides me the most value.
As for “most interesting illustrative quotes” I think it’s the “illustrative quotes” piece that’s doing the heavy lifting there - my goal is to jump straight to the one or two sentences that best illustrate the overall point that the author is trying to convey. Anecdotally it seems to work well.
I always like to ask for quotes because I can very quickly fact check them against hallucinations by searching for them in the source text.
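That check is easy to mechanise. A minimal sketch, assuming plain whitespace-normalised substring matching against the source text:

```python
# Sketch of the quote fact-check described above: every quote the model
# returns should appear verbatim in the source text; anything that doesn't
# is a candidate hallucination.
def check_quotes(source: str, quotes: list[str]) -> dict[str, bool]:
    collapsed = " ".join(source.split())  # normalise whitespace / line wrapping
    return {quote: " ".join(quote.split()) in collapsed for quote in quotes}

# Any quote mapped to False deserves a manual look before trusting the summary.
```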
Kinda tongue in cheek but here’s a summary from Gemini:
AI’s impact on work raises concerns about job displacement, particularly for knowledge workers, due to automation. However, AI also promises to augment human capabilities, changing the nature of work itself. This necessitates adaptation, new skills, and addressing ethical concerns like inequality and potential misuse. The rise of AI-operated firms and a potential deskilling crisis add further complexity. Ultimately, successful integration of AI requires proactive adaptation, ethical consideration, and a focus on equitable outcomes.
I use Perplexity for this and shared my prompts: https://kyefox.com/using-perplexity-ais-spaces-as-a-life-raft-in-an-age-of-ai-slop/