It is so refreshing to see an accessibility article where the author acknowledges that the correct solution is to fix bugs in 7 pieces of software, rather than expecting everyone who has ever typed anything into a computer to change how they act.
Except Unicode Latin numerals aren’t exactly in common usage, so it’s not clear how this would affect “everyone who has ever typed anything into a computer”.
This is just an example of a bigger pattern. Similar problems affect math symbols that some people use for fake bold/italic, or even use of multiple emoji.
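To make the fake-bold case concrete: the Mathematical Alphanumeric Symbols carry Unicode compatibility decompositions back to plain ASCII letters, so a generic normalization pass recovers the intended text. A small Python sketch (an illustration of the mechanism, not a claim about how any particular screen reader works):

```python
import unicodedata

# "bold" typed with Mathematical Sans-Serif Bold letters
# (U+1D5EF, U+1D5FC, ...), distinct code points that only
# *look* like b, o, l, d.
fake_bold = "\U0001d5ef\U0001d5fc\U0001d5f9\U0001d5f1"
assert fake_bold != "bold"

# NFKC normalization applies the compatibility decompositions the
# Unicode standard defines for these symbols, recovering plain ASCII.
assert unicodedata.normalize("NFKC", fake_bold) == "bold"
```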
While it may be preferable to re-use Latin letters, doing so introduces ambiguity that can confuse a screen reader.
Screen readers already have to solve the problem of figuring out how a word is pronounced from context. That might be because the word is a homograph (“a lead weight” versus “to lead”, for example), or because one word is emphasized over the others.
Latin numerals are an example of that general case: figuring out pronunciation from context.
Latin Unicode numerals (which, frankly, nobody in the Western world uses) are a case that will require hardcoded special handling in every screen reader, and that handling will not solve the problem for any other Unicode symbol or sequence of characters, such as the way flags are encoded in Unicode, for example.
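For what it’s worth, the common Roman-numeral code points (in the Number Forms block, starting at U+2160) ship with compatibility decompositions to ordinary Latin letters, so at least part of the mapping is already published in the standard rather than needing to be invented per reader. A quick Python check (again just an illustration, not a description of any actual screen reader):

```python
import unicodedata

# Ⅻ (U+216B, ROMAN NUMERAL TWELVE) is a single code point that
# merely *looks* like three Latin letters.
twelve = "\u216b"
assert len(twelve) == 1

# Its compatibility decomposition is part of the Unicode data,
# so NFKC normalization maps it to the plain-ASCII letters "XII".
assert unicodedata.normalize("NFKC", twelve) == "XII"
```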
To me it seems vastly preferable to re-use Latin letters over special Unicode symbols, except for the few cases (like the non-LTR Asian languages mentioned in the document) where using the Unicode symbols is unavoidable.