1. 25

  2. 2

    A great presentation overall, but I am not sure about the “human-level AGI is 10 years away is ridiculous” part (page 6). As I understand it, a 10-year timeline is the working assumption for many people working at DeepMind and OpenAI.

    The issue is orthogonal, though, since even human-level AGI won’t work that well for predicting social outcomes. It’s not like human interviews work that well either.

    1. 3

      From the presentation:

      Note that AI experts have a more modest estimate that Artificial General Intelligence or Strong AI is about 50 years away, but history tells us that even experts tend to be wildly optimistic about AI predictions.

      I don’t know whether the DeepMind/OpenAI people count as “AI experts”, though.

      1. 2

        That’s really “citation needed”. AI Timeline Surveys is the most comprehensive dataset I know of, and except for 1 out of 13, none puts it past 2050, which is ~30 years away.

        1. 2

          From a skim of that article, it seemed that most of the optimistic people were AGI folk and the pessimistic people were CS and AI folk. I know Rodney Brooks doesn’t think it will happen in our lifetimes.

          1. 1

            That’s still longer than 10 years though.

            I’d say we’d be well on our way when the majority of legal drudge-work (writing and parsing contracts) is done by computers. This will threaten a huge white-collar constituency though, so expect significant legal pushback…

            1. 2

              We’re already there. It will be a gradual change, so I don’t expect much pushback. See https://www.lawgeex.com/resources/aivslawyer/ for an example.

              1. 2

                It looks like this “AI” is essentially just checking whether an NDA deviates too much from the standard NDA clauses. I wonder how well the “machine-learning and deep learning technologies” would compare against comparatively simple statistical analysis (as we use for spam detection and whatnot), or even a simplistic diff-based script (sketched below)?

                Maybe I’m oversimplifying things? Computers are clearly better than humans at certain types of tasks, and always have been. I think more convincing evidence of “AI” is needed than this simple infographic.

                This is probably a much better metric of AI/ML effectiveness in general: “how much better does it do than the non-AI/ML solution?”, rather than “how much better does it do than humans?”
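
                To make that metric concrete: here’s a minimal sketch of the kind of diff-based baseline I have in mind, which flags a clause when its token overlap with the standard wording drops below a threshold. Everything in it (the Jaccard measure, the 0.5 threshold, the sample clauses) is invented for illustration, not taken from LawGeex:

                ```go
                package main

                import (
                    "fmt"
                    "strings"
                )

                // tokens lower-cases a clause and splits it into a set of words.
                func tokens(s string) map[string]bool {
                    set := map[string]bool{}
                    for _, w := range strings.Fields(strings.ToLower(s)) {
                        set[strings.Trim(w, ".,;:")] = true
                    }
                    return set
                }

                // jaccard measures token overlap between two clauses: |A∩B| / |A∪B|.
                func jaccard(a, b string) float64 {
                    ta, tb := tokens(a), tokens(b)
                    inter := 0
                    for w := range ta {
                        if tb[w] {
                            inter++
                        }
                    }
                    union := len(ta) + len(tb) - inter
                    if union == 0 {
                        return 1
                    }
                    return float64(inter) / float64(union)
                }

                func main() {
                    standard := "The receiving party shall hold all Confidential Information in strict confidence."
                    candidate := "Receiving party agrees to keep Confidential Information secret indefinitely."
                    // 0.5 is an arbitrary threshold for this sketch; a real baseline would tune it.
                    if jaccard(standard, candidate) < 0.5 {
                        fmt.Println("deviates from the standard wording; flag for human review")
                    } else {
                        fmt.Println("close enough to the standard wording")
                    }
                }
                ```

                Even something this crude makes the comparison meaningful: if the ML system can’t clearly beat a baseline like this, the “AI” label is doing a lot of work.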

                1. 1

                  In my experience, just about any NLP task benefits greatly from at least a bit of deep learning. Even if you operate with classical NLP methods, you can improve, say, parsing accuracy (or really anything) with the help of deep learning.

                  1. 1

                    I’ve only applied NLP once: extracting ingredients from recipe descriptions (e.g. [{ingredient: "chickpeas", quantity: "one can"}, {..}]), and I found that ~500 lines of Go code that essentially just do some string processing were significantly more accurate than ML/deep learning (a toy sketch of the kind of thing I mean is below).

                    I freely admit my inexperience in this field, and perhaps some more experienced people will get better results here, but there are two existing projects (1, 2) and my code seems to give much better results.
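
                    For the flavour of it, here is a toy version of that kind of string processing. The regex, the quantity words, and the Ingredient type are all invented for this sketch; the real ~500 lines had far more rules and normalisation:

                    ```go
                    package main

                    import (
                        "fmt"
                        "regexp"
                        "strings"
                    )

                    // Ingredient mirrors the JSON shape mentioned above.
                    type Ingredient struct {
                        Ingredient string
                        Quantity   string
                    }

                    // quantityRe matches a leading quantity such as "one can" or "2 cups".
                    // Toy pattern, invented for this sketch.
                    var quantityRe = regexp.MustCompile(`(?i)^((?:\d+|a|an|one|two|three|half)\s+(?:cans?|cups?|tbsp|tsp|grams?))\s+(?:of\s+)?(.+)$`)

                    // extract pulls one ingredient out of a line like "one can of chickpeas".
                    func extract(line string) (Ingredient, bool) {
                        m := quantityRe.FindStringSubmatch(strings.TrimSpace(line))
                        if m == nil {
                            return Ingredient{}, false
                        }
                        return Ingredient{Ingredient: m[2], Quantity: m[1]}, true
                    }

                    func main() {
                        for _, line := range []string{"one can of chickpeas", "2 cups flour", "salt to taste"} {
                            if ing, ok := extract(line); ok {
                                fmt.Printf("%+v\n", ing)
                            } else {
                                fmt.Printf("no rule matched: %q\n", line)
                            }
                        }
                    }
                    ```

                    Obviously this only covers the easy patterns, but a pile of rules like this can be surprisingly accurate on a narrow, well-understood domain.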

                2. 1

                  Thanks for this link, I didn’t know it was already that far along. It’s a bit weird that it’s an infographic, but hey.

                  Anyway, I don’t want to get too far into the weeds here. I do agree that we need to have an ethical discussion on the claims of AI when it comes to prediction of social outcomes.

                  1. 3

                    I actually disagree with that. Currently, the main problem with predicting social outcomes is that it doesn’t work. If it starts to work, I think that is the time to discuss ethics. Until then, “don’t use snake oil” is enough; there is no ethical dimension to it.