I know it’s easy to be in a constant state of rage at Uber, and this news makes it extremely easy to pile on. An innocent person died here, at the hands of a team of engineers attempting something incredibly difficult. I know for sure this will bring up (and already has, I’m sure) talking-head discussions on the ethics of AI, on who will be charged (and why or why not), and tons more litigation and lawsuits. But let’s not forget to sympathize with the engineering team here as well. This has to be the worst feeling ever, and it could have happened to any of us—it had to happen to someone.
My condolences to the innocent pedestrian’s family and friends. Also, my condolences to the team, who will carry this loss with them for the rest of their lives.
It seems like you haven’t read the article very carefully.
You completely forgot to mention the operator behind the wheel. If anyone is charged, it will most likely be that person, who will, regardless of the verdict, carry it for the rest of their life.
From https://www.sfchronicle.com/business/article/Exclusive-Tempe-police-chief-says-early-probe-12765481.php:
… it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway, …
… I suspect preliminarily it appears that the Uber would likely not be at fault in this accident, either, …
Two things.
First, the operator is the one person who can hardly be blamed. The idea that a car can drive itself and someone will step in when something goes wrong is fundamentally flawed. Engineers have known that this doesn’t work for many decades. Understanding what happens at the point of handoff, and how long it takes, is a fundamental part of aircraft safety and crew resource management (CRM). It takes humans time to assess a situation and step in to take control.
Second, police often blame the victims in car crashes. That’s in part why so few drivers ever get prosecuted and why the situation doesn’t change. I’ll believe it when Uber releases video of what happened.
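A rough back-of-envelope sketch of the handoff problem described above. The speed (~40 mph, as widely reported for this crash) and the takeover latencies are illustrative assumptions, not figures from the investigation:

```python
# How far does a car travel while a "safety driver" notices a problem,
# assesses it, and takes over? Speed and takeover times below are
# assumptions for illustration only.

MPH_TO_MPS = 0.44704
speed_mps = 40 * MPH_TO_MPS  # ~17.9 m/s

for takeover_s in (1.0, 2.5, 5.0):  # plausible range of human takeover latencies
    distance_m = speed_mps * takeover_s
    print(f"takeover in {takeover_s:>3} s -> car travels {distance_m:5.1f} m")
```

Even an attentive human needs a couple of seconds to re-engage, during which the car covers tens of meters.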
You completely forgot to mention the operator behind the wheel - If anyone, that person will most likely be charged and, regardless of the verdict, carry it for the rest of their life.
Presumably the operator is part of the engineering team, no? I’m not a District Attorney, or an attorney, or even a law enforcement officer, so I’m unable to comment on whether the operator will be charged, whether it makes sense to charge this person, or whether we’ll find that Uber put a car on the road that was not street legal and that this contributed to the crash.
Please don’t assume I didn’t read carefully. I tried to choose my words carefully in order not to speculate on the details of an ongoing investigation.
… it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway, …
Exactly. This makes the investigation all the more important. Maybe no one will be charged, because investigators will rule it an accident based purely on the fact that, autonomous or not, it was unavoidable given the pedestrian’s actions.
I think your second point raises an interesting issue. It may have been difficult for a human driver to see this person, but from the information given and all the pictures I’ve seen, it shouldn’t have been difficult for an autonomous driver to see them using different sensors (like depth or IR).
It shouldn’t have been speeding and it should have slowed down further or changed lanes when it saw that it was coming up on a pedestrian in the median.
This is the second incident I know of where an autonomous car has gotten into trouble, in part, by mimicking stupid human behavior. We have the technology to avoid things like this, and the standard for computer drivers needs to be significantly higher than the standard for humans. The NTSB needs to get these things off the road until they’re properly tested.
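A toy illustration of the sensor point above: a depth sensor (e.g. lidar) reports range, not brightness, so "coming from the shadows" is irrelevant to it. The scan values and the braking threshold below are made up for illustration; this is a sketch of the idea, not how any real AV stack works.

```python
# A forward depth scan is just a list of range returns in meters.
# An obstacle "in the shadows" still produces a close return.

BRAKE_RANGE_M = 30.0  # assumed comfortable braking distance at city speeds

def obstacle_in_path(depth_scan_m):
    """Return True if any range return in the forward scan is too close."""
    return any(r < BRAKE_RANGE_M for r in depth_scan_m)

clear_road = [80.0, 75.0, 82.0, 79.0]        # all returns far away
person_in_shadow = [80.0, 24.5, 82.0, 79.0]  # one return at 24.5 m

print(obstacle_in_path(clear_road))        # -> False
print(obstacle_in_path(person_in_shadow))  # -> True
```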
The fault actually lies with the driver, who was instructed to be alert and keep both hands on the wheel at all times. Obviously Uber should not have released this, and they should get shit for it, but I think that until there’s nobody behind the wheel, the responsibility for any accident falls on the driver, just as it does with planes presently.
The fault lies with the people who put the driver there. It’s beyond comprehension that they would rely on a safety driver. We’ve known for decades that humans cannot effectively monitor a system that’s mostly reliable. This goes back at least to Kibler (1965), was already well understood by Bainbridge (1983), and by Molloy & Parasuraman (1995) there was extensive research digging into why people are unable to do this and how to design environments where they can.
It is irresponsible of Uber/Waymo/GM and all of the manufacturers to put people in an impossible situation.
According to reports, it required intervention roughly every mile. I do agree there should be laws against putting such a weak system on the road. It should be able to drive unassisted at least as well as a human driver before we put humans behind the wheel, but after that point the driver should be culpable for failing to pay attention, especially if the driver was, for example, watching a feature-length film in the driver’s seat.
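To put "intervention roughly every mile" next to the bar a human driver sets: the human figures below are rough, assumed public statistics (on the order of one police-reported crash per ~500,000 miles and one traffic fatality per ~100 million vehicle miles in the US), used only to show the scale of the gap.

```python
# Crude scale comparison; all human-driver figures are assumed
# ballpark statistics, not measurements from this incident.

uber_miles_per_intervention = 1          # from the reports cited above
human_miles_per_crash = 500_000          # assumed rough figure
human_miles_per_fatality = 100_000_000   # assumed rough figure

crash_gap = human_miles_per_crash / uber_miles_per_intervention
fatality_gap = human_miles_per_fatality / uber_miles_per_intervention

print(f"interventions vs. human crash rate: {crash_gap:,.0f}x gap")
print(f"interventions vs. human fatality rate: {fatality_gap:,.0f}x gap")
```

Even if only a small fraction of interventions would otherwise have become crashes, the system is orders of magnitude short of the human baseline.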
If a company knowingly puts you in an impossible situation, where you cannot possibly do a task safely without injuring yourself or others, the company is generally liable, not you. Unless, for example, you’re a professional engineer, in which case you have a certain responsibility to inform yourself and say no. These poor drivers don’t know the research behind visual attention, automation, and fatigue. It feels very unfair to prosecute them for doing, to the best of their abilities, a job they were told they could do, when they’ve been set up for failure.
I completely agree with what you said here.
Now, in retrospect, don’t you think that without such anthropomorphic language selling the “intelligence” and “learning” of machines, Uber (and Google, and Tesla) would have had a harder time putting such cars on the road?
This language is dangerous for anyone who does not understand the math and the inner workings of these systems: they can be manipulated too easily.
… it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway, …
It sounds plausible that, autonomous or not, this may have happened. I don’t want to get into an argument over an investigation that I don’t have any insight into – I’d only be able to speculate, as would you.
The final write-up and RCA for this will be a better fit than a news piece.
The prosecutor seems to think that the car was not at fault because the woman suddenly jumped into the road in the midst of moving traffic and there’s no reasonable way the car could have avoided hitting her: https://www.sfchronicle.com/business/amp/Exclusive-Tempe-police-chief-says-early-probe-12765481.php
Can’t wait to see whether Uber’s business practices affected the development/usage/maturity of the software.