1. 16

This video is insane!

I worked on autonomous cars way back in 2005 and participated in DARPA’s 2007 Urban Challenge, and the cars back then — 11 years ago! — performed better than this!

To quote the top HN comment:

How did LIDAR and IR not catch that? That seems like a pretty serious problem.

And that’s a severe understatement. The video makes it look like it was “too dark to see”, but to LIDAR she should have been spotted half a football field away! There’s no way it should have missed her.
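
To put rough numbers on that claim (all figures below are assumptions for illustration, not crash-report data): even at 40 mph, a car that detects an obstacle half a football field out has ample room to stop. A quick sketch:

```python
# Back-of-the-envelope check: stopping distance at ~40 mph vs. an assumed
# ~50 m ("half a football field") LIDAR detection range. All numbers are
# illustrative assumptions, not figures from the actual incident.

def stopping_distance_m(speed_mps, reaction_s=0.5, decel_mps2=7.0):
    """Reaction distance plus braking distance (v^2 / 2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

speed = 40 * 0.44704       # 40 mph in m/s (~17.9 m/s)
detection_range = 50.0     # assumed LIDAR spotting distance, in meters

print(f"{stopping_distance_m(speed):.1f} m to stop")  # ~31.8 m, well under 50 m
```

Even with a generous reaction allowance baked in, the car stops with nearly 20 m to spare, which is why a LIDAR miss here is so hard to explain.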

The dark video footage just makes me wonder whether there’s something fishy going on here, because to a lay person (a human driver) it might look “understandable”, but to an autonomous car? No way!


  2. 16

    To quote another HN comment:

    LIDAR aside, computer vision and a raw video feed is more than enough to have prevented this collision.

    Exactly! Engineers designing autonomous cars are required to account for low-visibility conditions, even far worse than what this video shows (think hail, rain, dust, etc.). This was easy! And yet the car showed no signs of slowing down.

    EDIT: twitter comments like this pain me. People need to be educated about the capabilities of autonomous cars:

    She is walking across a dark road. No lights even though she has a bike. She is not in a cross walk. Not the car’s fault.

    Yes it was the car’s fault. This is shocking, extraordinary behavior for an autonomous car.

      1. 9

        In reality, both the pedestrian and the car (and Uber) share some responsibility. You shouldn’t cross a four lane road at night wearing black outside of a crosswalk. A human driver is very unlikely to see you and stop. Not blaming the victim here, just saying it’s easier to stay safe if you don’t do that. However, the promise of autonomous cars with IR and LIDAR and fancy sensors is that they can see better than humans. In this case, they failed. Not to mention the human backup was very distracted, which is really bad.

        From the video I don’t think a human would have stopped in time either, but Uber’s car isn’t human. It should be better, it should see better, it should react better. Automatic collision avoidance is a solved problem already in mass-market cars today, and Uber failed it big time. Darkness is an excuse for humans, but not for autonomous cars, not in the slightest.

        She should still be alive right now. Shame on Uber.

        1. 18

          You can’t conclude that someone would not have stopped in time from the video. Not even a little. Cameras aren’t human eyes. They are much, much worse in low visibility and in particular with large contrasts, like those of headlights in the dark. I can see just fine in dark rooms where my phone can’t produce anything aside from a black image. It will take an expert looking at the camera and its characteristics to understand how visible that person was and from what distance.

          1. 9

            From the video I don’t think a human would have stopped in time either, but Uber’s car isn’t human.

            Certainly not when distracted by a cell phone. If anything, this just provides more evidence that driving while distracted by a cell phone, even in an autonomous vehicle, is a threat to life, and should be illegal everywhere.

            1. 9

              Just for everyone’s knowledge: you’re 8 times as likely to get into an accident while texting; that’s double the rate for drinking and driving.

              1. 6

                He was not driving.

                He was carried around by a self driving car.

                I hope that engineers at Uber (and Google, and…) do not need me to note that the very definition of “self driving car” is a huge UI flaw in itself.

                That is obvious to anyone who understands UI, UX or even just humans!

                1. 5

                  She was driving. The whole point of sitting in the driver’s seat of a TEST self-driving car is for the driver to take over and handle situations like this.

                  1. 6

                    No, she was not.

                    Without this incident, you would soon have seen a TV spot with precisely this scene: a (hot) businesswoman looking at the new photos her family uploaded to Facebook, with a voice-over saying something like: “we can bring you to those you Like”.

                    The fact that she was paid to drive a prototype does not mean she was an experienced software engineer trained not to trust the AI and to keep continuous control of the car.

                    And indeed the software chose the speed. At that speed, human intervention was impossible.

                    Also, the software did not deviate, despite the free lane beside it and despite the fact that the victim had to cross that lane first, so there was enough time for a computer to calculate several alternative trajectories, or even simply to alert the victim with light signals or sounds.

                    So the full responsibility must be traced back to the people at Uber.

                    The driver was simply fooled into thinking he could trust the AI by a stupidly broken UI.

                    And indeed the driver/passenger reactions were part of Uber’s test.

                    1. 2

                      Looking at your phone while riding in the driver’s seat is a crime for a reason. Uber’s AI failed horribly and all their cars should be recalled, but the driver failed too. If the driver had not been looking at their phone, literally any action at all could have been taken to avoid the accident. It’s that driver’s responsibility to stay alert, with attention on the road, not looking at a phone or reading a book or watching a film; airline pilots manage it every single day. Is their attention much more diminished? Yes, of course it is. Should we expect literally zero attention from the “driver”? Absolutely not.

                      1. 5

                        Do you realize that the driver/passenger reactions were part of the test?

                        This is the sort of self driving car that Uber and friends want to realize and sell worldwide.

                        And indeed I guess that this “driver” behaviour was pretty frequent among the prototype testers.

                        And I hope somebody will ask Uber to provide in court the recordings of all the tests done so far, to prove that they did not know that drivers do not actually drive.

                        NO. The passenger must not be used as a scapegoat.

                        This is an engineering issue that was completely avoidable.

                        The driver’s behaviour was expected and desired by Uber.

                        1. 4

                          You’ve gotta stop doing this black-and-white nonsense. Firstly, stop yelling. I’m not using the passenger as a scapegoat, so I don’t know who you’re talking to. The way the law was written, it’s abundantly clear that this technology is to be treated as semi-autonomous. That does not mean that Uber is not negligent. If you are sitting in a driver’s seat and you’re watching Harry Potter while your car drives through a crowd of people, you should be found guilty of negligence, independent of any charges that come to both the lead engineers and owners of Uber. You have a responsibility to at least take any action at all to prevent deaths that otherwise may be no fault of your own. You can’t just lounge back while your car murders people, and in the same respect, when riding in the driver’s seat your eyes should not be on your phone, period.

                          Edit: That image is of a fully autonomous car, not a semi-autonomous car. There is actually a difference despite your repeated protestations. Uber still failed miserably here, and I hope their cars get taken off the road. I know better than to hope their executives will receive any punishment except maybe by shareholders.

                          1. [Comment from banned user removed]

                            1. 3

                              I guess you are not an engineer, nor a programmer.

                              This isn’t the first time you’ve pulled statements out of a hat as if they are gospel truth without any evidence and I doubt it will be the last. I think your argument style is dishonest and for me this is the nail in the coffin.

                              1. 0

                                I’m not sure I understand what you mean…

                                The UI problem is really evident, isn’t it?

                                The passenger was not perceiving herself as a driver.

                              2. 2

                                If there is “no way” a human can do this, then we’ve certainly never had astronauts pilot a tiny spacecraft to the moon without being able to physically change position, and we certainly don’t have military pilots in fighter jets continuously concentrating while refueling in air on missions lasting 12 hours or more… or truck drivers driving on roads with no one for miles… or…

                                Maybe Uber is at fault here for not adequately psychologically screening and training its operators for “scenarios of intense boredom.”

                                1. 0

                                  You are talking about professionals specifically trained to keep that kind of concentration. And even a military pilot won’t maintain concentration on the road if her husband is driving and she knows from experience that he’s trustworthy.

                                  I’m talking about Uber’s actual goal here, which is to build “self driving cars” for the masses.

                                  It’s just a stupid UI design error. A very obvious one to see and to fix.

                                  Do you really need some hints?

                                  1. Remove the car’s control from the AI and turn the AI into something that enhances the driver’s senses.
                                  2. Make it observe the driver’s state and refuse to start if he’s drunk or too tired to drive.
                                  3. Stop it from starting if any of its parts is not working properly.

                                  This way the responsibility for an incident would fall on the driver, not on Uber’s board of directors (barring factory defects, obviously).

                                  1. 4

                                    You’re being adversarial just to try to prove your point, which we all understand.

                                    You are talking about professionals specifically trained to keep that kind of concentration. And even a military pilot won’t maintain concentration on the road if her husband is driving and she knows from experience that he’s trustworthy.

                                    A military pilot isn’t being asked (or trained) to operate an autonomous vehicle. You’re comparing apples and oranges!

                                    I’m talking about the actual Uber’s goal here, which is to build “self driving cars” for the masses.

                                    Yes, the goal of Uber is to build a self driving car. We know. The goal of Uber is to build a car that is fully autonomous; one that allows all passengers to enjoy doing whatever it is they want to do: reading a book, watching a movie, etc. We get it. The problem is that those goals are just that: goals. They aren’t reality, yet. And there are laws which Uber and its operators must follow in order for any department of transportation to allow these tests to continue–in order to build up confidence that autonomous vehicles are as safe as, or (hopefully) safer than, already licensed motorists. (IANAL, nor do I have any understanding of said laws, so that’s all I’ll say there)

                                    It’s just a stupid UI design error. A very obvious one to see and to fix.

                                    So, your point is that the operator’s driving experience should be enhanced by the sensors, and that the car should never be fully autonomous? I can agree to that, and have advocated for that in the past. But, that’s a different conversation. That’s not the goal of Uber, or Waymo.

                                    The reason a pedestrian is dead is because of some combination of flaws in:

                                    • the autonomous vehicle itself
                                    • a distracted operator
                                    • (apparently) a stretch of road with too infrequent cross walks
                                    • a pedestrian jaywalking (perhaps because of the previous point)
                                    • a pedestrian not wearing proper safety gear for traveling at night
                                    • an extremely ambitious engineering goal of building a fully autonomous vehicle that can handle all of these things safely

                                    … in a world where engineering teams use phrases like, “move fast and break things.” I’m not sure what development methodology is being used to develop these cars, but I would wager a guess that it’s not being developed with the same rigor and processes used to develop autopilot systems for aircraft, or things like air traffic controllers, space craft systems, and missile guidance systems…

                                    1. 2

                                      … in a world where engineering teams use phrases like, “move fast and break things.” I’m not sure what development methodology is being used to develop these cars, but I would wager a guess that it’s not being developed with the same rigor and processes used to develop autopilot systems for aircraft, or things like air traffic controllers, space craft systems, and missile guidance systems…

                                      Upvoted for this.

                                      I’m not being adversarial to prove a point.

                                      I’m just arguing that Uber’s board of directors is responsible and must be held accountable for this death.

                                      1. 3

                                        Nobody here is arguing that the board of directors should not be held accountable. You’re being adversarial because you’re bored is my best guess.

                                      2. 2

                                        Very well said, on all of it. If anyone is wondering, I’ll even add to your last point what kind of processes developers of things like autopilots follow. That’s things like DO-178B, with so many assurance activities and so much independent vetting that companies who have been through the evaluation claim it can cost thousands of dollars per line of code. The methods to similarly certify techniques like deep learning are in the prototype phase, working on simpler instances of the tech. Uber would have had to run rigorous processes at several times the pace and scale, at a fraction of the cost, of experienced companies… on cutting-edge techniques requiring new R&D just to know how to vet them.

                                        Or they cut a bunch of corners, hacking stuff together and misleading regulators to grab a market quickly like they usually do. And that killed someone who, despite human factors, should’ve lived if the tech (a) worked at all and (b) was evaluated against common road scenarios that could cause trouble. One or both of these is false.

                        2. 2

                          I don’t know if you can conclude that’s the point. Perhaps the driver is there in case the car says “I’m stuck” or triggers some other alert. They may not be an always on hot failover.

                          1. 11

                            They may not be an always on hot failover

                            IMO they should be, since they are testing a high risk alpha technology that has the possibility to kill people.

                    2. 4

                      The car does not share any responsibility, simply because it’s just a thing.

                      Nor does Uber, which again is a thing, a human artifact like others.

                      Indeed we cannot put in jail the car. Nor Uber.

                      The responsibility must be traced back to people.

                      Who is ultimately accountable for the AI driving the car?

                      I’d say Uber’s CEO, the board of directors, and the stockholders.

                      If Uber were an Italian company, the CEO and the board of directors would probably be put in jail.

                      1. 3

                        Not blaming the victim here

                        People often say this when they’re partly blaming the victim to not seem overly mean or unfair. We shouldn’t have to when they do deserve partial blame based on one fact: people who put in a bit of effort to avoid common problems/risks are less likely to get hit with negative outcomes. Each time someone ignores one to their peril is a reminder of how important it is to address risks in a way that makes sense. A road with cars flying down it is always a risk. It gets worse at night. Some drivers will have limited senses, be on drugs, or drunk. Assume the worst might happen since it often does and act accordingly.

                        In this case, it was not only a four lane road at night the person crossed: people who live in the area said on HN that it’s a spot noticeably darker than the other dark spots, and that it stretches out longer. The implication is that there are other places on that road with more light. When I’m crossing at night, I do two to three things to avoid being hit by a car:

                        (a) cross somewhere where there’s light

                        (b) make sure I see or hear no car coming before I cross.

                        Optionally, (c) cross the first 1-2 lanes, get to the very middle, pause for a double check of (b), and then cross the next two.

                        Even with blame mostly on car & driver, the video shows the human driver would’ve had relatively little reaction time even if the vision was further out than video shows. It’s just a bad situation to hit a driver with. I think person crossing at night doing (a)-(c) above might have prevented the accident. I think people should always be doing (a)-(c) above if they value their life since nobody can guarantee other people will drive correctly. Now, we can add you can’t guarantee their self-driving cars will drive correctly.

                        1. 2

                          Well put. People should always care about their own lives. And they cannot safely assume that others will care as much.

                          However, note that Americans learned to blame “jaywalking” through strong marketing campaigns after 1920.

                          Before that, the roads were for people first.

                          1. 2

                            I just saw a video on that from “Adam Ruins Everything.” You should check that show out if you like that kind of stuff. Far as that point, it’s true that it was originally done for one reason but now we’re here in our current situation. Most people’s beliefs have been permanently shaped by that propaganda. The laws have been heavily reinforced. So, our expectations of people’s actions and what’s lawful must be compatible with those until they change.

                            That’s a great reason to consider eliminating or modifying the laws on jaywalking. You can bet the cops can still ticket you on it, though.

                        2. 3

                          In reality, both the pedestrian and the car (and Uber) share some responsibility.

                          I’ve also seen it argued (convincingly, IMO) that poor civil engineering is also partially responsible.

                        3. 3

                          And every single thing you listed is mitigated by just slowing down.

                          Camera feed getting fuzzy? Slow down. Now you can get more images of what’s around you, combine them for denoising, and re-run your ML classifiers to figure out what the situation is.

                          ML models don’t just classify what’s in your sensor feeds. They also give you numerical measures of how close your feed is to the data they were trained on. When those measures decline, it could be because the sensors are malfunctioning. It could be rain/dust/etc. It could be a novel, untrained situation. Every single one of those things can be mitigated by just slowing down. In the worst case, you come to a full stop and tell the rider they need to drive.
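
                          The policy above can be sketched in a few lines. This is a toy illustration, not any real vendor’s control loop; the thresholds and the confidence-to-speed mapping are invented:

```python
# Toy sketch of the "when unsure, slow down" policy described above.
# The thresholds and the confidence-to-speed mapping are invented for
# illustration; a real stack would fuse per-sensor health, classifier
# scores, and out-of-distribution measures.

def target_speed(base_speed_mps, confidence):
    """Scale the allowed speed down as perception confidence degrades."""
    if confidence >= 0.9:       # feed looks like the training data
        return base_speed_mps
    if confidence >= 0.5:       # degraded: rain, dust, low light
        return base_speed_mps * confidence
    return 0.0                  # novel/unreadable scene: stop, hand over

assert target_speed(17.0, 0.95) == 17.0   # clear conditions: full speed
assert target_speed(17.0, 0.2) == 0.0     # unreadable scene: full stop
```

                          The point is structural: degraded perception should feed directly into the speed command, so “unsure” always means “slower”.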

                        4. 6

                          Since we have the law tag on this, I’ll be the one to suggest it:

                          Autonomous car designs should be required to have a sign-off from a licensed Professional Engineer.

                          1. 4

                            Ars Technica has some more video of the crash location and analysis.

                            There’s going to be a long trickle of updates on the investigation until there’s anything final like a report from the NTSB, state agency, or Uber itself. Let’s post them here to the comments rather than one at a time.

                            1. 1

                              I prefer them posted separately.

                              Posting them here as comments hides the updates, as this thread is already on the second page of Lobsters (and it’s going to be archived soon in many feed readers).

                              It would be wise to link the new posts here too (and to link this discussion from the new posts).

                              1. 3

                                There’s basically nothing to learn in these updates, and they’re generally pretty bad threads as we second-guess the much-better-informed investigators. Breaking news stories are so thin the threads are dominated by people re-litigating some other, weakly-related but highly emotional topic rather than including any significant technical content. Look at this one: we’re talking about other things Uber’s done, speculating that the video’s doctored, guessing at the capabilities of cameras and lidar on the car, defining the term “snuff film”. There’s one substantive conversation about the limits of human attention and where to assign culpability for this failure, but rather than a thoughtful exploration of the topic it’s skating at the edge of a flame war. When stories have little content about emotionally charged topics, threads generally spiral into increasingly ugly places and become net-negative for the community.

                                Folks who are especially interested in this topic can bookmark it and always see messages since their last read highlighted or follow it on other, more news-oriented sites.

                                1. 1

                                  For what it’s worth, I do not agree.

                                  You literally have one single meta thread that is a flame: the one defining the whole post as rubbernecking.

                                  The rest seems a technically rooted discussion on AI and law-related matters.

                                  We don’t second-guess investigators; we reflect together, from a worldwide perspective, on the legal and social implications of the facts. Such a worldwide perspective is valuable to my eye, also because it’s unlikely that Uber (or Google or whoever) won’t export its product outside the USA.

                                  So, I’m not going to willingly violate the administrative policy of the site, but I think you should reconsider your decision.

                                  1. 3

                                    Thanks for digging in to explain your opinion; I appreciate it, and I slept on it before responding. My longtime personal (not official policy) thinking is that news doesn’t lead to interesting, informative threads, since forums don’t know enough about the particular facts to speak about an event, so they end up in increasingly off-topic hypotheticals that often become fights.

                                    And then, speaking from a moderator perspective, this is the first thread in three weeks, and only the second in more than three months, where a conversation developed into a flame war where I deleted personal attacks to try to stop it from getting worse and harming the regular collaborative atmosphere.

                                    When I wrote the comment above saying we should post links here rather than in new stories, I deliberately left off the moderator hat because I was trying to express my personal opinion rather than set an editorial policy. It’s a pretty fine hair to split, but it’s what the hats are for. I’ll try to be even more explicit about the distinction in the future. I don’t have enough confidence in this opinion that I think it should be a hard rule on this story or in general, though if my opinions have changed since writing the linked comments, it’s only that they’ve grown stronger.

                                    It’s hard to do an experiment when everyone knows they’re in it, but let’s use this event. If you (or anyone reading!) sees a good link about this event, please do post it as a new story, and let’s evaluate the conversations for depth, novelty, and civility in a year or so. Maybe we’ll know enough then to draft some explicit language about breaking news for the about page or the story submission guidelines on the ‘Submit Story’ page.

                            2. 7

                              To me, this is an exploitative post and not worthy of inclusion on lobste.rs. I encourage people to flag this submission as off-topic.

                              1. 11

                                There is no technical content here, just a clip of somebody dying.

                                Flagged for rubbernecking. I am very disappointed.

                                1. 6

                                  I disagree. The video does show that the pedestrian should have been blatantly obvious to any of the LIDAR/IR sensors that are supposed to be used in self-driving cars, yet the car didn’t even try to slow down or dodge before plowing straight into her. This may be the only direct evidence of this we get, unless there’s some serious legal action against Uber in connection with this.

                                  1. 6

                                    Do we know what the make and model of the self-driving car was? Do we know what sensor suite is available? Do we know what processing power or routines were on it?

                                    Without links to that information, this sort of thread is just Monday-morning quarterbacking by webshits.

                                    1. 2

                                      Do we know all of the details? Nope! But I reject the idea that nothing can be said about this incident until we know every detail. This video provides important facts, we’ve already gotten some analysis here by experts in the field, and it constitutes a wedge to get more information. We now know that this collision should have been easily avoidable, and this puts more pressure on Uber to explain how it was possible, or risk many informed people believing that their self driving tech is a shit show.

                                      1. 3

                                        This video provides important facts, we’ve already gotten some analysis here by experts in the field, and it constitutes a wedge to get more information.

                                        We have a bunch of people not actively involved in autonomous car research (or if they are, without links) talking about what is “obviously” going on, and how “easy” something is.

                                        This whole thing is worth discussing, in depth, once the inevitable engineering report is released. Until then, cluttering up Lobsters with a video of a person getting killed so we can clutch pearls is a waste of both time and space.

                                      2. [Comment removed by moderator pushcx: Stop it with the insults.]

                                    2. 3

                                      This is an update to an ongoing story that has included technical content. It’s not “a clip”, it’s “the clip” that we’ve been waiting to see, since we all have both technical and non-technical skin in this game.

                                      1. 2

                                        This is a discussion about the safety of self-driving cars and the relevant tech, with evidence of an important abnormality. Flagging this for “not being tech” makes zero sense.

                                        1. 11

                                          If you post a loaf of bread and then have a discussion about oven PID controllers it doesn’t vindicate the loaf of bread submission.

                                          If you had posted, say, an article about obstacle-avoidance at speed and in the comments or story description linked this video, that would’ve been fine. You basically just dropped a Twitter snuff film and then linked to some random HN and Twitter comments.

                                          As it is, the discussion so far has been users pontificating about things they probably don’t know much about, failing to link useful engineering information (except the human factors stuff @nickpsecurity did), and in general just lowering the overall quality of the site.

                                          EDIT: This sort of submission is the exact worst sort of thing, because it is content matter that everybody has an opinion about and very few have expertise in.

                                          1. 3

                                            a Twitter snuff film

                                          It is not a “snuff film”; the footage is edited.

                                            EDIT: This sort of submission is the exact worst sort of thing, because it is content matter that everybody has an opinion about and very few have expertise in.

                                            @angrysock, your comment is the “worst sort of thing”.

                                            You’re taking a technical discussion about how self-driving cars behave, commentary from several real-world experts (including myself), primary-source footage of a critically important event in the field, and turning it into a bickering fest.

                                            Please go away unless you have something technical to add to the discussion and commentary on self-driving tech. Downvoted for concern-trolling.

                                      2. 3

                                        I gave @Shamar this article recently about human vs automated control. One example in it is a human using a self-driving vehicle that switches over to manual recognizing a disaster that’s about to happen in a split second. The author questions what we expect the human to do to handle the situation the vehicle couldn’t if they weren’t even thinking about the road at the time. Watch the video of the person’s reaction to find that the hypothesis just got tested with a similar scenario.

                                        It’s as bad as one would guess both for an automated car deployed too early and the distracted driver reacting too late.

                                        1. 2

Indeed, she was not even driving.

                                          But that’s perfectly obvious if you consider the goals of the “self driving car”: they want to sell “mind blowing” vehicles like this worldwide.

To get there, they had to verify that passengers can trust the AI and let it drive on their behalf.

Do you really think this was the first time a tester was recorded being distracted in the driver’s seat of one of these prototypes?

I’d guess that, if it were a problem for Uber, the tester would have been replaced with one actually driving the car!

Indeed, the passenger’s reactions were under test just like the AI’s.

                                          Uber wanted the driver to trust the AI that way.

                                          1. 4

                                            Uber wanted the driver to trust the AI that way.

                                            Uber wants its customers to trust their AI. Uber should not trust its own AI when it is in development.

                                        2. 3

It looks like some kind of accident was inevitable, but it’s really scary that the car never braked. In the video I can clearly see something in the road starting at 0:03, a full 3 seconds before the car hit her. With some simple image processing (increasing exposure, stretching contrast, etc.) the computer should have seen her even earlier than that, and it should not have taken 3 seconds to apply the brakes. If it had swerved and braked it might not have hit her. If the car was equipped with an infrared camera, it would easily have seen her from much further away.

                                          IMO if we’re going to automate away driving we should make it safer and get rid of car accidents while we’re at it. They’re a stupid way to die, and there are technical solutions to a lot of the causes.
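The preprocessing the comment above alludes to (boosting exposure and contrast on a dark raw frame) can be sketched in a few lines. This is only an illustrative sketch, not anyone’s actual pipeline; the gamma value and the tiny sample frame are made up:

```python
import numpy as np

def enhance_low_light(frame: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Brighten a dark 8-bit grayscale frame: stretch the used intensity
    range to the full 0..255, then apply gamma correction (gamma < 1
    lifts the shadows)."""
    f = frame.astype(np.float64)
    lo, hi = f.min(), f.max()
    if hi > lo:  # contrast stretch to the full range
        f = (f - lo) * (255.0 / (hi - lo))
    f = 255.0 * (f / 255.0) ** gamma  # gamma-correct to brighten dark regions
    return f.clip(0, 255).astype(np.uint8)

# A mostly-dark frame (values 5..40): after enhancement, a pedestrian-shaped
# blob in it would be far easier for a detector to threshold.
dark = np.array([[5, 10], [20, 40]], dtype=np.uint8)
bright = enhance_low_light(dark)
```

Even this trivial enhancement spreads a 35-level dark frame across the full dynamic range, which is exactly why “too dark to see” is a weak excuse for a camera feed the computer can re-expose after the fact.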

                                          1. 3

                                            Crash, not accident. Accident implies it’s without fault or somehow unavoidable. There are accidents but I’m not sure this is one.

That said, I have over 150,000 miles on motorcycles and close to three times that driving cars, but I am approaching 50 and my night vision has declined. Based on the video, I am not sure I could have reacted effectively (hit the brakes, swerved, or both) in the ~2.5 seconds between visibility and crash, but I am sure my experience would have had me slow a little and scan more. I am highly skeptical that the vehicle’s sensors are worse than my eyes.
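Those ~2.5 seconds can be turned into rough stopping-distance numbers. A back-of-the-envelope sketch, assuming roughly 17 m/s (about 38 mph) and 7 m/s² of braking deceleration; both figures are illustrative assumptions, not from the incident report:

```python
def stopping_distance(speed_ms: float, reaction_s: float, decel_ms2: float = 7.0) -> float:
    """Total distance covered: constant speed during the reaction time,
    then constant deceleration to a stop (d = v^2 / 2a)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

v = 17.0  # ~38 mph, illustrative
human = stopping_distance(v, reaction_s=1.5)    # typical human reaction time
machine = stopping_distance(v, reaction_s=0.2)  # near-instant automated braking
```

With these numbers a human needs about 46 m to stop while an automated system needs about 24 m, which is why a lidar return from “half a football field away” (~45 m) should have been enough even if only barely so for a human.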

                                            1. 4

                                              Rather than the video, I want to see the car’s telemetry stream just prior to the collision. Logs too, please.

                                              1. 2

The video is scary. The operator was obviously distracted and caught completely off guard, as shown by her “OH SHIT” expression.

                                                1. 2

Very curious how the Google car would have fared in this scenario. Anyway, it’s deeply disappointing that the car didn’t even try to brake; this seems like a total failure for the self-driving car.

                                                  1. 3

It almost certainly would have braked. This is trivial for a lidar: an empty road, a single obstacle moving at a slow pace, and the low-light conditions should help, not hurt.
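The gating logic being alluded to here can be sketched very simply: keep only lidar returns inside the lane corridor ahead of the car and brake if the closest one falls inside the stopping distance. Every name and threshold below is invented for illustration, assuming 2-D returns as (forward, lateral) points in meters:

```python
# Hypothetical 2-D lidar returns: (x_forward_m, y_lateral_m) per point.
def nearest_obstacle_in_lane(points, lane_half_width=1.8, max_range=80.0):
    """Return the distance to the closest return inside the car's lane
    corridor, or None if the corridor is clear."""
    in_lane = [x for x, y in points
               if 0 < x <= max_range and abs(y) <= lane_half_width]
    return min(in_lane) if in_lane else None

# One off-lane return and two in-lane returns; the closest in-lane
# return (35 m) wins.
scan = [(60.0, 5.0), (35.0, 0.4), (70.0, -0.2)]
d = nearest_obstacle_in_lane(scan)
should_brake = d is not None and d < 46.0  # compare against stopping distance
```

Real perception stacks cluster and track returns over time rather than reacting to single points, but even this crude corridor check would flag a pedestrian-sized return on an otherwise empty road.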

                                                  2. 0


                                                    This video is insane

                                                    I don’t mind lobsters posts adding useful context to links, but I could do without zero value editorial statements. Especially ones like this that make use of ableist language which might make me hesitant to invite users to the site.