Why do you think that Haskell’s type system has slowed the rate at which String is replaced by Text? I imagine that in a dynamically typed language with both of those as semantic options it’d be equally hard to just immediately upgrade people. You could change the meaning of literal syntax, but that would involve introducing subtle semantics bugs into every use of a string.
I think part of the point was that Haskell-the-language was unwilling to make such big changes to the semantics of the language and instead opted for a library solution. That solution is slowly being integrated into more and more core systems as it matures and the community accepts it.
I’m not sure that I believe having a looser typing scheme would have improved this process.
I guess I don’t see how that’s Haskell specific. Python had a similar-ish problem with their lack of unicode support which has only been partially resolved through a lot of pain in the Python 3 upgrade pathway. A similar thing would have been possible in Haskell, but unlikely given the committee at the helm of the language spec. Upgrading to new semantics of such a core type as String is going to be incredibly painful in any language. I’d even argue that in Haskell the types help make it less of a burden by clearly marking where these upgrades have occurred.
I’ve always personally found OverloadedStrings to be an elegant solution to the problem of finding a very minimal change to the core Haskell semantics which was mostly compatible with H98 and allowed for “upgraded” “String” semantics. It also elegantly creates a pathway for both Text and ByteString to have nice literal representations. It’s definitely a hack, but I quite like it.
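To make the mechanism concrete, here's a minimal sketch of what OverloadedStrings buys you. The extension desugars every string literal to `fromString "..."`, so the same literal can be typed as `String`, `Text`, or `ByteString` depending on the expected type at the use site (the function and variable names here are just illustrative):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text as T
import qualified Data.ByteString.Char8 as BS

-- The same literal "hello" inhabits three different types,
-- because each is resolved via its IsString instance.
greetingString :: String
greetingString = "hello"

greetingText :: T.Text
greetingText = "hello"    -- desugars to fromString "hello" at type Text

greetingBytes :: BS.ByteString
greetingBytes = "hello"   -- and at type ByteString here

main :: IO ()
main = do
  putStrLn greetingString
  putStrLn (T.unpack greetingText)
  BS.putStrLn greetingBytes
```

This is also why it stays mostly compatible with H98: code that never mentions `Text` or `ByteString` just resolves its literals to `String` as before, and the upgrade happens type signature by type signature rather than all at once.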
This is true. I also wish editors like IntelliJ were better at figuring out types so they could provide more useful hinting. A lot of that information is only available at runtime unless you use some kind of commenting convention, which actually does get you more out of static analysis tools.