This doesn’t seem crazy to me. I think it is sometimes a good idea to censor somebody’s internet content, especially for children and other easily impressionable people. You could use the same infrastructure to show warnings rather than censoring, too.
E.g. I would like to be able to prevent a young person under my care from viewing fascist or extreme right wing content. Or I might want to restrict access to pornographic materials in a public educational setting.
Censorship is no substitute for talking to folks about these things, but I think I could have an open conversation with a kid where I got their consent for enabling some censorship of their internet feeds.
I think the crazy part is the huge amount of customisation it allowed, including nested DSLs, and Microsoft persevering with it long into newer IE despite no market take up. In any case, I think crazy meant that all of this was a neat undiscovered/unexplored technical ecosystem. Not a comment on parental controls.
It’s actually a lot more sensible than alternative proposals, it just runs afoul of the Metacrap problem. Remember when webmasters used to conscientiously and accurately tag all of their content? Neither do I.
Nice writeup on how metadata is always wrong. My daily interaction with this phenomenon is that I don’t think I have ever, but ever, seen an e-commerce website with a taxonomy for products, which actually has a way to filter through that taxonomy, and that is actually coherent, and lists products which are actually consistently correctly classified according to it. The first is common enough, the second, rare, the third, unheard of.
It did its job perfectly: staved off interference by politicians by pointing them at a big complicated mess of technical measures that no one could understand, while not actually having any effect because no one used it.
Tangent: I kinda like the idea of a first-class content warning element. Imagine if you could mark up specific elements, kinda like the <summary>/<details> pair. You could use it to completely hide content for parental, educational, or mental health needs, or press a button to show it. You see implementations like this in places such as Mastodon, but it’s not standardized or set up in a way that lets me change a setting in my browser or on the network to hide certain topics.
<cw>
  <tags>
    <li>bigotry</li>
  </tags>
  <article>
    <h1>US State Passes New Law Banning Something Necessary for Minority Community</h1>
    <p>…</p>
  </article>
</cw>
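No such element exists today, so any hiding behavior would have to come from a user agent or an extension. As a rough sketch (the <cw> and <tags> elements are hypothetical, invented above; nothing here is standard), an extension could collapse warned content whose tags match a user-configured blocklist:

```javascript
// Hypothetical sketch: hide <cw> blocks whose tags match the user's blocklist.
// The <cw>/<tags> markup is the invented example above, not a real standard.

// Pure helper: does any of the element's tags appear in the blocked set?
function shouldHide(tags, blocked) {
  return tags.some((t) => blocked.has(t.trim().toLowerCase()));
}

// Extension glue (browser-only): collapse matching <cw> elements,
// driven by the blocked-topic set from a browser setting.
function applyContentWarnings(doc, blocked) {
  for (const cw of doc.querySelectorAll("cw")) {
    const tags = [...cw.querySelectorAll("tags li")].map((li) => li.textContent);
    if (shouldHide(tags, blocked)) {
      cw.style.display = "none"; // or swap in a "show anyway" button instead
    }
  }
}
```

The pure `shouldHide` check is the part a browser setting would drive: the user's list of topics becomes the `blocked` set, and the page markup supplies the tags.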
Research suggests content warnings do not have a beneficial effect on survivors of trauma, and in fact have the potential to amplify distress caused by the content that was warned about. It is effectively pseudoscientific to provide content warnings.
Some research does suggest that trigger warnings may be able to reduce distress, but only very, very marginally. A 2018 paper, though, concluded that “trigger warnings” were not only largely ineffective but also, in some cases, amplified the anxiety people reported in response to distressing material. Another study from 2021 noted that trigger warnings can ultimately prolong the adverse impacts of recalling painful memories, perhaps because they tell our brains to expect something negative and, in doing so, worsen the distress we feel.
Both of the linked studies seem to examine the case where people receive the warning and then consume the content. Isn’t that missing the point (let people avoid distressing content)?
The warnings are so vague and applied to so many things that I expect there are a lot of false positives. Avoiding an entire article on a subject you’re interested in, just because the author slapped a keyword at the top that may or may not reflect something that would actually bother you, is kinda throwing the baby out with the bathwater. I expect there’s more value in marking individual sections, like “the next four paragraphs go into detail …”. I would guess (I don’t actually know) that people would be more likely to actually choose to skip that section, since they can still read the other parts and get more of an idea of what they’re skipping from the surrounding context.
Indeed, I think most content warnings would probably be better replaced by a brief summary instead of just a couple keywords. And brief summaries might be valuable to all readers anyway.
I didn’t know that was the point; it makes sense to me then. (Though I recall a meta-analysis from last year suggesting that the people content warnings are meant for are more likely to consume the content when warned vs. not warned. I didn’t bookmark it since it was a preprint and not peer-reviewed, but I can look it up if you’d like.)
Yes, you could certainly do it this way, similar to Mastodon, but then it’s ad hoc and can’t be controlled globally as easily from a browser setting unless the pattern truly is standardized. I suppose you could slap the expanded property on by default and have the user agent or an add-on hide things only if the user requests it. Or do you hide everything for the average user, so they need to click a lot to see content?
The UK should just mandate that adult websites add in this metadata rather than whatever it is they’re working on now.
This is what I was referring to, yeah: the unexplored technical ecosystem, not the parental controls.
Taxonomy is complicated.
Why Usage of Trigger Warnings Persist Despite Research Suggesting They Might Be Counterproductive
If true, avoiding triggers is still just one use case. With a tag like nudity, maybe you choose not to open it at work.
You could do this using the existing details element.
I even added some microformats markup to make it machine-readable.
Small nit: this is definitely not an h-card. Did you mean h-entry? And probably p-name instead of p-title?
Fixed, thanks. Not sure how that happened.
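Putting the two suggestions above together, a sketch of what that could look like (the class names are the standard microformats2 vocabulary; the overall arrangement is just one possible shape, not an established pattern):

```html
<!-- Sketch: a content warning built from the existing <details> element,
     annotated as a microformats2 h-entry (p-name for the title, e-content
     for the body). The structure is illustrative; the classes are real. -->
<article class="h-entry">
  <details>
    <summary>Content warning: bigotry</summary>
    <h1 class="p-name">US State Passes New Law Banning Something Necessary for Minority Community</h1>
    <div class="e-content">
      <p>…</p>
    </div>
  </details>
</article>
```

A user agent or add-on could then match the summary text against the user’s blocked topics and toggle the `open` attribute, though without a standardized pattern that stays ad hoc, as noted above.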