I wish people would look at the problems with their current systems before jumping on the deep learning bandwagon. The biggest problem with autocorrect on iOS is that they use letters as the input and treat pressing a button as binary. Their on-screen keyboard doesn’t account for or learn about parallax. I usually type correctly with my left thumb on the iPad, but my right one often hits the key one to the right of the one I think I hit. This triggers the second problem: they treat punctuation as a strong indication of a word ending. Around 70% of my typos on the iPad (which I’m using now) come from hitting comma instead of m. This starts a new word and transforms the half-word before it into something surprising. Correcting this is painful because, unlike Android, backspace does not undo autocorrections. The next largest cause of typos is pulling down slightly on a key, particularly on the top row, so I hit 8 instead of I. I have made errors of both kinds typing this message, and they have been the only errors that interrupted my flow.
The root cause of both of these is that their prediction model does not appear to take keyboard layout into account or to have any signal stronger than the stream of characters from the keyboard. They could instead model the keys on the keyboard as peaks in a probability space, with the gaps as places where there’s an equal probability of hitting either adjacent key, and the edges of the visible buttons as places where there’s a high probability of hitting one key and a lower probability of hitting the adjacent one. Similarly, they could model drags, and small finger movements that don’t quite trigger drags, as a probability of meaning either of the two versions.
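To make that concrete, here’s a minimal sketch of the idea in Swift. This is not anything iOS actually exposes; the key positions, the Gaussian spread, and the flat prior are all assumptions for illustration. The point is that each tap yields a distribution over keys, so a touch landing between m and comma stays ambiguous instead of being read as a hard comma that ends the word.

```swift
import Foundation

// Hypothetical key layout data; a real keyboard would use its actual geometry.
struct Key {
    let label: Character
    let center: (x: Double, y: Double)
}

// Isotropic Gaussian likelihood of a touch at `touch` given the key's centre.
// sigma is roughly half a key width; a learned per-hand offset could model
// the parallax / right-thumb bias described above.
func likelihood(touch: (x: Double, y: Double), key: Key, sigma: Double = 12.0) -> Double {
    let dx = touch.x - key.center.x
    let dy = touch.y - key.center.y
    return exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
}

// Posterior over keys for one tap: touch likelihood times a prior from
// whatever language model is available (a flat prior here, for brevity).
func keyPosterior(touch: (x: Double, y: Double), keys: [Key]) -> [(Character, Double)] {
    let scores = keys.map { (key: Key) -> (Character, Double) in
        (key.label, likelihood(touch: touch, key: key))
    }
    let total = scores.reduce(0) { $0 + $1.1 }
    return scores.map { ($0.0, $0.1 / total) }.sorted { $0.1 > $1.1 }
}

// A tap slightly to the right of 'm' comes out as "probably m, maybe comma",
// which a downstream predictor can resolve from context.
let keys = [
    Key(label: "n", center: (x: 60, y: 0)),
    Key(label: "m", center: (x: 90, y: 0)),
    Key(label: ",", center: (x: 120, y: 0)),
]
let touch = (x: 104.0, y: 2.0)
for (label, p) in keyPosterior(touch: touch, keys: keys) {
    print("\(label): \(String(format: "%.2f", p))")
}
```

The same shape of interface would carry drag gestures too: a movement that almost triggers a drag just becomes another pair of weighted hypotheses handed to the predictor rather than a binary decision made at the keyboard layer.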
With that keyboard interface, a fairly simple predictor could significantly improve my typing speed, without needing any buzzwords.
Oh, and please fire whichever idiot decided to put the emoji key in the place that’s easiest to hit on the on-screen keyboard of a ‘Pro’ device.