1.
  1.

    I had a somewhat similar idea. A friend of mine who does NLP told me about word embeddings, in which each word is mapped to a vector in such a way that similar words get similar vectors. In fact, even relationships between pairs of words are captured: in some cases you can compute something like boy + (queen − king) and the nearest encoding to the result is girl. The thought, then, was to distribute colors so that words with more similar encodings get more similar colors.

    Of course, this doesn’t actually capture the “semantic” meaning of any single word, but I thought it was a similar enough idea to mention here.

    1.

      The thing about word embeddings is that their vectors are high-dimensional (often hundreds of components); you’d have to do something like principal component analysis (PCA) to project the embedding down to 3D (say, RGB) space to get a color, and you might not get good results from that.
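The analogy arithmetic mentioned above (boy + (queen − king) ≈ girl) can be sketched with NumPy. The vectors here are made up for illustration, with hand-picked axes roughly meaning [royal, female, child, person]; real embeddings such as word2vec learn hundreds of opaque dimensions from text, and the analogy only holds approximately.

```python
import numpy as np

# Toy embedding, purely illustrative: hand-made 4-d vectors whose axes
# loosely mean [royal, female, child, person]. Real embeddings are
# learned from text and much higher-dimensional.
emb = {
    "king":  np.array([1.0, 0.0, 0.0, 1.0]),
    "queen": np.array([1.0, 1.0, 0.0, 1.0]),
    "man":   np.array([0.0, 0.0, 0.0, 1.0]),
    "woman": np.array([0.0, 1.0, 0.0, 1.0]),
    "boy":   np.array([0.0, 0.0, 1.0, 1.0]),
    "girl":  np.array([0.0, 1.0, 1.0, 1.0]),
}

def nearest(vec, exclude):
    # Cosine similarity against every word not used in the query,
    # mirroring how most_similar-style lookups usually work.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

# boy + (queen - king): shift "boy" along the male -> female direction.
target = emb["boy"] + (emb["queen"] - emb["king"])
print(nearest(target, exclude={"boy", "queen", "king"}))  # -> girl
```

With a real embedding the result vector rarely lands exactly on a word, which is why the nearest-neighbor lookup (excluding the query words) is the standard way to read off the analogy.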
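The PCA-to-color idea from the reply can be sketched as follows: center the embedding matrix, take the top three principal components via SVD, and min-max scale each component to [0, 1] so the three coordinates act as RGB. The random matrix here is a stand-in for real embeddings (e.g., loaded word2vec or GloVe vectors); as the reply notes, three components usually discard most of the variance, so colors may not separate words well.

```python
import numpy as np

# Stand-in for a real embedding matrix: 6 "words" x 50 dimensions.
# In practice you'd load pretrained vectors (word2vec, GloVe, ...) here.
rng = np.random.default_rng(0)
words = ["king", "queen", "man", "woman", "boy", "girl"]
vectors = rng.normal(size=(len(words), 50))

# PCA via SVD: center the data, then keep the top 3 principal components.
centered = vectors - vectors.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
coords3d = centered @ vt[:3].T          # shape (6, 3)

# Min-max scale each axis to [0, 1] so the 3 components act as RGB.
lo, hi = coords3d.min(axis=0), coords3d.max(axis=0)
rgb = (coords3d - lo) / (hi - lo)

for w, c in zip(words, rgb):
    print(f"{w:6s} -> RGB({c[0]:.2f}, {c[1]:.2f}, {c[2]:.2f})")
```

Words that are close in the original embedding stay close in the projection, so they get similar colors; the min-max scaling is just one simple way to squeeze the components into a valid RGB range.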