1. 16

  2. 17

    Really interesting piece. Interesting also that he calls up Bloomberg to talk about it.

    This type of comment, of which we hear a lot at the moment, about deep-learning/recurrent neural-net AI:

    Sometimes the Acura seemed to lock on to the car in front of it, or take cues around a curve from a neighboring car. Hotz hadn’t programmed any of these behaviors into the vehicle. He can’t really explain all the reasons it does what it does. It’s started making decisions on its own.

    seems to be in stark contrast to this (posted on lobste.rs a month or so ago):

    Sussman […] explained that he thought most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. “I’m not interested in that. I want software that’s accountable […] that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead.” He then said something […] along the lines of, “If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court.”

    the last sentence of which might sound crazy but actually makes a lot more sense to me in context than “yeah we trained this thing and it seemed to do what we want in all our tests, but wow, then it drove that school bus off the cliff, who knew it was going to do that!” And there was me thinking behaviourism was just a kind of nutty overhang from the ‘50s.

    1. 2

      The “hadn’t programmed any of these behaviors” aspect kinda reminded me of the puff piece on Viv.ai from a few months ago: http://www.esquire.com/lifestyle/a34630/viv-artificial-intelligence-0515/ .

      Don’t have any AI knowledge, but I wonder at what point taking your car for a ‘training drive’ or establishing preferences also counts as a form of programming.

      1. 1

        I’d missed that Sussman link the first time around–fascinating.

        For us (healthcare), it’s a regulatory requirement that we be able to explain, when asked, the decisions our software makes, and this has come up when we’ve considered using ML/training-based approaches for certain narrow pattern-matching problems.

        1. 1

          Has it affected your decision about whether or not to use it? What were the outcomes?

          1. 1

            Our existing implementation is rule-based, and the explainability issue has largely stopped us from pursuing an ML replacement.
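
            Roughly the shape of what that looks like, with made-up names (just a sketch, not our actual code): every decision carries the rule that produced it, so there’s always an answer to “why did you flag this?”.

              from dataclasses import dataclass
              from typing import Callable, Optional

              @dataclass
              class Rule:
                  name: str                        # cited back in the explanation
                  matches: Callable[[dict], bool]  # predicate over the input record

              @dataclass
              class Decision:
                  flagged: bool
                  reason: Optional[str]            # name of the rule that fired, or None

              # Hypothetical rules, purely for illustration.
              RULES = [
                  Rule("dose_exceeds_max", lambda r: r["dose_mg"] > r["max_dose_mg"]),
                  Rule("allergy_conflict", lambda r: r["drug"] in r["allergies"]),
              ]

              def evaluate(record: dict) -> Decision:
                  for rule in RULES:
                      if rule.matches(record):
                          return Decision(True, rule.name)
                  return Decision(False, None)

              # evaluate({"dose_mg": 900, "max_dose_mg": 500, "drug": "x", "allergies": []})
              # -> Decision(flagged=True, reason="dose_exceeds_max")

            A trained model can give you the flagged bit, but not the reason field, and the reason field is the part the regulator asks for.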

      2. 10

        His philosophizing at the end of the article seems pretty disconcerting to me. Not sure I agree that a world where everyone lives in a virtual reality is such a hot idea.

        1. 7

          “I know everything there is to know”, Geohot 2015.

          I am reminded of the Socratic paradox, which is roughly mirrored in the Bible and on many fortune cookies: the wise man knows he knows nothing; the fool thinks he knows all.

          1. 5

            “laser based radar”. Should this tell us all we need to know about the journalist?

            1. 5

              Kudos to him for the work to build this system, but I won’t be convinced until I’ve seen this car operate in extraordinary circumstances (accidents, congestion, weather (lasers don’t like rain scattering their beam), …). I can think of a million ways this system could fail. I’m certain this shouldn’t stop you from continuing whatever you’re working on, but just from listening to his rant at the end I get the impression this guy is too full of himself, maybe even more than good ol' Icarus.

              1. 5

                It looks like he was teaching himself MNIST autoencoders 5 months ago: https://github.com/geohot/nnweights

                It’s definitely possible that self-driving is achievable with simple neural networks, but it brings to mind this quote from Andrew Ng:

                One thing about speech recognition: most people don’t understand the difference between 95 and 99 percent accurate. Ninety-five percent means you get one-in-20 words wrong. That’s just annoying, it’s painful to go back and correct it on your cell phone.

                Ninety-nine percent is game changing. If there’s 99 percent, it becomes reliable. It just works and you use it all the time. So this is not just a four percent incremental improvement, this is the difference between people rarely using it and people using it all the time.
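
                To put that gap in rough numbers (a back-of-the-envelope sketch, assuming per-word errors are independent, which they aren’t in practice):

                  # Chance a 20-word sentence comes out with zero errors:
                  p95 = 0.95 ** 20   # ~0.36, about a third of sentences are clean
                  p99 = 0.99 ** 20   # ~0.82, most sentences are clean
                  print(round(p95, 2), round(p99, 2))

                Same four points of accuracy, but it’s the difference between “most sentences need fixing” and “it mostly just works”, which is Ng’s point.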

                I have a feeling comma.ai is still in the 95 percent phase, or much less. This probably isn’t even close to what Google or Baidu is building, not to mention they have teams of AI experts who have been working with these algorithms for much longer.

                It’s super impressive that he’s got this far by himself though. Should be interesting to see what he does next.

                1. 3

                  Looks like he’s been keeping himself busy since settling with Sony. He’s clearly a smart guy, although I don’t know that he’ll still believe he knows everything (even in the field of AI) in 10 years.