1. 50

A more balanced review of what the public knows about the situation with Uber and the pedestrian.

    1. 26

      I’ve talked to some people in and close to this industry, and it feels like we’re a good 15 years away from autonomous vehicles. The other major issue we’re not addressing is that these cars cannot be closed source like they are now. At a minimum, the industry needs to share with each other and use the same software or the same algorithms. We can’t enter a world where Audi claims their autonomous software is better than Nissan’s in adverts.

      People need to realize they won’t be able to own these cars or modify them in any way if they ever do come to market. The safety risks would be too great. If the cars are all on the same network, one security failure could mean a hacker could kill thousands of people at once.

      I really think the current spending on this is a huge waste of money, especially in America, where tax money given to companies to subsidize research could instead be used to get back the train system we lost and move cities back inward like they were in the early 1900s. I’ve written about this before:

      http://penguindreams.org/blog/self-driving-cars-will-not-solve-the-transportation-problem/

      1. 20

        If the cars are all on the same network

        Any company that is connecting these cars to the Internet is being criminally negligent.

        I say that as an infosec person who worked on self-driving cars.

        1. 3
        2. 2

          They have to be able to communicate though to tell other cars where they intend to go or if there is danger ahead.

          1. 7

            It’s called blinkers and hazard lights.

            1. 9

              That’s just networking with a lot of noise in the signal.

              1. 7

                Networking that doesn’t represent a national security threat, and nothing that a self-driving car shouldn’t already be designed to handle.

                1. 3

                  What happens when someone discovers a set of blinker indications that can cause the car software to malfunction?

            2. 1

              Serious question (given that you’ve worked on self-driving cars): is computer vision advanced enough today to be able to reliably and consistently detect the difference between blinkers and hazards for all car models on the roads today?

              1. 2

                As is often the case, some teams will definitely be able to do it, and some teams won’t.

                Cities and states should use it as part of a benchmark to determine which self-driving cars are allowed on the road, in exactly the same way that humans must pass a test before they’re allowed a driver’s license.

                The test for self-driving cars should be harder than the test for humans, not easier.
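                To make the benchmark concrete: the genuinely hard part is the perception, i.e. reliably detecting that a given lamp on an arbitrary car model is flashing at all. Once a perception layer reports that per-side state, telling hazards apart from blinkers is trivial logic. A toy sketch, with every name and interface invented for illustration:

                ```python
                # Toy sketch only: assumes a perception layer that already reports,
                # per side, whether an indicator lamp is flashing. The hard problem
                # is producing these booleans from pixels in the first place.
                from dataclasses import dataclass

                @dataclass
                class LampObservation:
                    left_flashing: bool
                    right_flashing: bool

                def classify_signal(obs: LampObservation) -> str:
                    """Hazards flash both sides in sync; a blinker flashes one side."""
                    if obs.left_flashing and obs.right_flashing:
                        return "hazard"
                    if obs.left_flashing:
                        return "left_turn"
                    if obs.right_flashing:
                        return "right_turn"
                    return "none"

                print(classify_signal(LampObservation(left_flashing=True, right_flashing=True)))
                # -> hazard
                ```

                So a road test of this capability would really be testing the perception layer, not this last classification step.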

          2. 2

            They could use an entirely separate cell network that isn’t connected to the Internet. All Internet-enabled devices, like the center console, could use the standard cell network, with a read-only bus between the two for sensor data like speed, oil pressure, etc.
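            A minimal sketch of that split, assuming the layout described above; the endpoint and field names are invented, and in a real car the read-only property would have to be enforced in hardware (a data diode or unidirectional gateway), not just in software:

            ```python
            # Minimal sketch, not a real automotive stack: the drive side only
            # publishes sensor frames, and the Internet-facing console only
            # listens; there is no path for the console to send commands back.
            import json
            import socket

            SENSOR_FEED = ("127.0.0.1", 9999)  # hypothetical isolated one-way link

            def publish_sensor_frame(speed_kph: float, oil_pressure_kpa: float) -> None:
                """Drive-computer side: emit a snapshot; it never reads from this link."""
                frame = json.dumps({"speed_kph": speed_kph, "oil_pressure_kpa": oil_pressure_kpa})
                with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                    sock.sendto(frame.encode(), SENSOR_FEED)

            def read_sensor_frame() -> dict:
                """Console side: receive-only view of the sensor data."""
                with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                    sock.bind(SENSOR_FEED)
                    data, _addr = sock.recvfrom(1024)
                    return json.loads(data.decode())
            ```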

      2. 11

        The other major issue we’re not addressing is that these cars cannot be closed source like they are now.

        I strongly agree with this. I believe autonomous vehicles are the most important advancement in automotive safety since the seatbelt. Can you imagine if Volvo had kept a patent on the seatbelt?

        The autonomous vehicle business shouldn’t be about whose car drives the best; it should be about who makes the better vehicles. Can you imagine the ads otherwise? “Our vehicles kill 10% fewer people than our competitors!” Ew.

      3. 2

        I don’t buy your initial claims.

        When you said “we’re 15 years away from autonomous vehicles”, what did you mean exactly? That it’ll be at least 15 years before the general public can ride in them? Waymo claims this will happen in Phoenix this year: https://amp.azcentral.com/amp/1078466001 That the majority of vehicles on US roads will be autonomous? Yeah, that’ll definitely take over 15 years!

        We can have a common/standard set of rigorous tests that all companies need to pass, but we don’t need them to literally all use the same exact code. We don’t do that for aeroplanes or elevators either. And the vanguard of autonomous vehicles consists of large corporations that aren’t being funded by tax dollars.

        That said, I agree that it would be better to have more streetcars and other light rail in urban areas.

        1. 6

          It will be at least 15 years before fully autonomous vehicles are available for sale or unrestricted lease to the general public. (In fact, my estimate is more like twice that.) Phoenix is about the most optimal situation imaginable for an autonomous vehicle that’s not literally a closed test track. Those vehicles will be nowhere near equipped to deal with road conditions in, for example, a northeastern US winter, which is a prerequisite to public adoption, as opposed to tests which happen to involve the public.

          Also, it’s a safe bet this crash will push back everyone’s timelines even further.

          1. 1

            I think you are correct about sales to the public but a managed fleet that the public can use on demand in the southern half of the country and the west coast seems like it could happen within 15 years.

    2. 7

      Should be treated like an airliner crash: Investigation, lessons learned, improvements to make sure it’s not repeated.

        1. 2

          I don’t know what more we can ask for.

          Improvements will be made.

          No company wants this liability.

          1. 8

            I don’t know what more we can ask for.

            At least one human in jail.

            And if Uber cannot prove that it was the first time a test driver was distracted during a drive, at least the whole board of directors of Uber in jail.

            1. 4

              At least one human in jail.

              It’s very likely that there will be a scapegoat or two.

              But I think this is probably good for the industry.

              I’m no historian, but I imagine that this is a little bit like when the first airplanes were invented. At first there were no rules. You just made an airplane and flew around.

              Until some bystander got hurt or killed. In those days, we were not such a litigious society, so most people probably said tough luck.

              But eventually we had passenger travel, and the government decided we needed rules and the FAA (or whatever came before it) was created. They make the rules.

              At first, air travel was not so safe. But after every accident we improved.

              And when there were accidents, there were liability lawsuits. If gross negligence could be proven, then maybe even some airline company executives went to jail???

              Even now, when there is human error and an airliner crashes, I don’t think anyone goes to jail?

              We are still in the early days.

            2. 1

              What does that solve?

              1. 9

                It has a net-positive social effect.

                1. Justice.
                2. The U.S.A. would prove to its citizens that it holds the monopoly on the legitimate use of physical force: otherwise, if you accept that a company can kill, killers will all become entrepreneurs.
                3. All future boards of directors of any robotics company will take human safety very seriously, and will keep taking it seriously every time a board of directors goes to jail.
                4. The whole DataScience/AI industry will learn to sell just what it can explain (aka debug) and prove correct (which is much more than you think, actually!).
                5. The whole software industry will begin to take software quality seriously.
                6. ISIS won’t have a very good reason to infiltrate AI software companies in the U.S.A. …

                I think I could go on for a while…

                1. 1

                  if you accept that a company can kill, killers will all become entrepreneurs

                  It’s called a private militia. They’ve been around since before Uber and Google. 🙄

                  1. 3

                    Are you stating that in the U.S.A. a private militia has the right to kill people without answering to the courts?

                    I really did not know that!

                    Because, you know, some people say you should not require explanations from an AI!

                    And if a private militia can kill people with that same freedom… I can suddenly understand the U.S.A.’s problems with guns!

                    1. 2

                      It’s the 2nd Amendment: the final check against government corruption when all three branches fail to do their job. Given how divided the media keeps the US, it would basically turn into a shooting gallery with each side taking on their media-designated enemies.

                      The only neutral scenario I could think of where it may apply is people taking out politicians that took bribes to pass laws that harmed constituents, and were immune to prosecution. People on both sides tend to look down on whoever takes bribes for laws. As in, it enforces the integrity of an essential system, with everything else handled within the system.

                      I’d still be afraid to see any use of the 2nd Amendment play out, though. There will be a lot of collateral murder.

                      1. 0

                        @nickpsecurity I read your reply three times, looked at wikipedia and still I do not understand what you mean.

                        The monopoly on the legitimate use of violence is given to states by their people.

                        No state is obliged to answer in court for each single life it takes to preserve the law.
                        That’s because the state itself represents justice (on behalf of its people, in a democracy).

                        The state does not need to explain why it kills: the explanations are due from the people who represent the state (police, judges and so on) to ensure they do not abuse the power the state gives them.

                        Does the 2nd Amendment give U.S. citizens the same right as the state?
                        That would explain @oz’s comment, but it still sounds extremely strange.

                        For example, why don’t killers always appeal to it when in court?

                        1. 3

                          Quick request: if you reply to someone, they get an email saying that you replied. If you use @ in front of a name, they get another email saying they were “mentioned” in the same thread. I suggest leaving out the @ when it’s the person you were replying to so they just get one email. I also leave it off for another party if they’re already reading the thread.

                          Regarding the 2nd Amendment: its wording was ambiguous, leading to two interpretations:

                          1. It’s an individual’s right to bear arms to use in self-defense against all enemies. That might include people attacking them, corrupt politicians, or foreign invaders. Some of these organize into unofficial militias that are basically groups that share this belief in a specific locale. There’s over 200 of them.

                          2. It’s about a state-level, military organization governed by the laws of that state and controlled by its governor. That’s basically the Army and Air National Guard. These often also have police powers in a state, too.

                          There’s no consensus on the subject. No 1 is used to justify gun ownership. Presidents also used to shoot people on the streets in less-civil times. No 2 is implemented across the states, too. I’m in No 1 territory just because I doubt U.S. military personnel make a good check against U.S. military personnel: they probably see each other like cousins in a big family. There are some court opinions from long ago suggesting No 1 is OK when the three branches fail to do their job. Anyone trying it will be imprisoned for murder, though, after being vilified by whatever side voted for that person. Generally, most just move to a state that runs things the way they like, tolerating the government’s abuses.

                          The militias are doing nothing, waiting for The Big Moment when the federal government does something so bad it justifies them going to war. We’ve had smaller moments over and over and over: the Feds like the so-called “fait accompli” strategy where they do a little bit of evil at a time, building up power slowly, with each move independently justified and the media narrowly focusing on it in isolation. Like the boiling frog metaphor, the citizens tolerate more corruption that way, not seeing the bigger picture or slowly forgetting why certain things happened to begin with. The Big Moment won’t come because it already did, over time. A worse situation will come down the road. I found it illuminating to compare the abuses listed in the Declaration of Independence that justified war on British rule against the abuses of the current U.S. government. There are too many similarities.

                          The militias haven’t done anything about anything, though. Mostly they just drink, socialize, and sport shoot in the woods, from what I can tell. The folks that have shot politicians have usually been crazy or evil, doing it for their own reasons. They’re really random. They definitely don’t help justify any legitimate use of the 2nd Amendment when that happens: every shooting has people trying to roll back No 1 on the list. Who knows what will happen in the future, but that’s the relevant background on the subject.

                          1. 2

                            Thanks nickpsecurity.
                            Sorry for the duplicated mails… my fault, but maybe Lobste.rs is missing a DISTINCT clause.

                            Your post gave me an interesting and deep historical perspective on a U.S.A. issue that I cannot really understand as a European.

                            This deeply improves my understanding, thanks!

                            Anyone trying it will be imprisoned for murder, though…

                            This is the point, I think: a state cannot allow anybody to kill without answering in court for murder. That’s just because otherwise it would lose the key to its own power: legitimacy over its use of violence.

                            This does not mean that each person answering in court for a murder is guilty and will go to jail. Just that he has to prove that the death was not attributable to his own actions.

                            So, in this case, Uber must prove that they had no way to prevent the death.

                            E.g., that they could not test the car on roads closed to public traffic, that they had never before observed another test driver distracted in the driving seat, that the car was correctly maintained, that the LIDAR system was tested to work at that speed and in those lighting conditions, that the various AI components had no bugs, and so on…

                            1. 2

                              Yeah, they should have to explain that stuff if they were to get tried for it. The case for many companies is they just get investigated and sued, with their lawyers holding it off. Sometimes they lose a lot of money on it. Their next move is to do the minimum necessary to avoid a similar loss. This might achieve real risk reduction. Or it will be a dodge with another disaster down the road.

                              Most of the time these are mechanical processes we understand really well. Self-driving cars aren’t. So I have no idea what will happen, just because a robust version of the concept hasn’t been demonstrated even by academics. They might even be able to use that as a defense: “we did all we could. Not even cutting-edge R&D was doing much better on correctness.” Of course, the LIDAR results vs the Grand Challenge that I read about long ago make me think there were some truly reckless acquisition and testing practices. Hopefully, lots of LIDAR experts can chip in with testimony saying it’s total garbage, to set some kind of baseline for what’s acceptable.

                              Seeing and responding to a big-ass object right in front of it should probably be in the baseline. ;)

    3. 5

      This is a fascinating case. It’s very unfortunate that the pedestrian had to die for it to come before us. However, had the car been driven by a human, nobody would be talking about it!

      That said, the law does not currently hold autonomous vehicles to a higher standard than human drivers, even though it probably could do so given the much greater perceptiveness of LIDAR. But is there any precedent for doing something like this (holding autonomous technology to a higher bar than humans)?

      1. 13

        Autonomous technology is not an entity in law, and if we are lucky, it never will be. Legal entities designed or licensed the technology, and those are the ones the law finds responsible. This is similar to the argument that some tech companies have made that “it’s not us, it’s the algorithm.” The law does not care. It will find a responsible legal entity.

        This is a particularly tough thing for many of us in tech to understand.

        1. 25

          It’s hard for me to understand why people in tech find it so hard to understand. Someone wrote the algorithm. Even in ML systems where we have no real way of explaining their decision process, someone designed the system, someone implemented it, and someone made the decision to deploy it in a given circumstance.

          1. 11

            Not only that, but there’s one other huge aspect that probably nobody is thinking about: this incident is likely to start the ball rolling on certification and liability for software.

            Move fast and break things is probably not going to fly in the face of too many deaths from autonomous cars. Even if they’re safer than humans, there are going to be repercussions.

            1. 8

              Even if they’re safer than humans, there are going to be repercussions.

              Even if they are safer than humans, a human must be held accountable for the deaths they cause.

              1. 2

                Indeed, and I believe those humans will be the programmers.

                1. 4

                  Well… it depends.

                  When a bridge breaks down and kills people due to bad construction practices, do you put the bricklayers in jail?

                  And what about a free software that you get from me “without warranty”?

                  1. 4

                    No - but they do take the company that built the bridge to court.

                    1. 5

                      Indeed. The same would work for software.

                      At the end of the day, whoever is accountable for the company’s products is accountable for the deaths that such products cause.

                  2. 2

                    Somewhat relevant article that raised an interesting point RE: VW cheating emissions tests. I think we should ask ourselves if there is a meaningful difference between these two cases that would require us to shift responsibility.

                    1. 2

                      Very interesting read.

                      I agree that the troupe of AI experts shares a moral responsibility for this death, just like the developers at Volkswagen of America shared a moral responsibility for the fraud.

                      But, at the end of the day, the software developers and statisticians were working for a company that is accountable for the whole artifact it sells. So the legal accountability must be assigned to the company’s board of directors/CEO/stockholders… whoever is accountable for the activities of the company.

                  3. 2

                    What I’m saying is that this is a case where those “without warranty” provisions may be deemed invalid.

                2. 1

                  I don’t think it’ll ever be the programmers. It would be negligence either on the part of QA or management. Programmers just satisfy specs and pass QA standards.

          2. [Comment from banned user removed]

          3. 2

            It’s hard to take responsibility for something evolving in such a dynamic environment, potentially used for billions of hours every day, for the next X years. I mean, knowing that, you would expect to have 99.99% of cases tested, but here it’s impossible.

            1. 1

              It’s expensive, not impossible.

              It’s a business cost and an entrepreneurial risk.

              If you cannot take the risks and pay the costs, that business is not for you.

      2. 4

        It’s only a higher bar if you look at it from the perspective of “some entity replacing a human.” If you look at it from the perspective of a tool created by a company, the focus should be on whether there was negligence in the implementation of the system.

        It might be acceptable and understandable for the average human to not be able to react that fast. It would not be acceptable and understandable for the engineers on a self-driving car project to write a system that can’t detect an unobstructed object straight ahead, for the management to sign off on testing, etc.

    4. 3

      Perhaps a hardware override bypassing the computer driver to always brake in some situations makes sense? Though maybe hard braking is not always the best avoidance strategy, and that is why it isn’t done.
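      As a sketch of the idea, with all thresholds and interfaces invented for illustration (and note it hard-codes exactly the always-brake policy questioned above; a real system would put such a check in a separate, independently certified unit):

      ```python
      # Sketch of an independent brake override: force full braking whenever an
      # obstacle is inside the computed stopping distance, regardless of what
      # the main driving software commanded. All numbers are illustrative.

      def stopping_distance_m(speed_mps: float, decel_mps2: float = 6.0,
                              reaction_s: float = 0.2) -> float:
          """Distance covered during the reaction time plus braking at constant decel."""
          return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

      def brake_command(planner_brake: float, speed_mps: float,
                        nearest_obstacle_m: float) -> float:
          """Brake command in [0, 1]; the emergency stop always wins over the planner."""
          if nearest_obstacle_m <= stopping_distance_m(speed_mps):
              return 1.0  # hard override, bypassing the planner entirely
          return planner_brake

      # At ~17 m/s (about 60 km/h) an obstacle 25 m ahead triggers the override:
      print(brake_command(planner_brake=0.0, speed_mps=17.0, nearest_obstacle_m=25.0))
      ```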

      edit:

      but most of these cars in testing still get software crashes which shut them down entirely on a fairly frequent basis.

      Yikes.