1. 25

    I’d like to provide a more sympathetic outside perspective.

    There are a few common complaints about Elm and the Elm community:

    • Blocking discussion (locking threads etc.)
    • Insufficient communication about Elm development
    • Things getting removed from the language

    With regards to blocking discussion, I think the logic is something like this:

    • The core developers have a roadmap for Elm development and they want to stick to it
    • They tried including more developers but haven’t found an effective way to deal with more contributors
    • Therefore, they have limited time
    • They can spend this time rehashing the same arguments and debating half-baked ideas, or they can spend this time following their roadmap, but not both.

    I would prefer that the discussions weren’t removed or locked, but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time. I’ve read most of these discussions, and other than people venting, nothing is ever achieved in them. My reflexive reaction is to be uncomfortable (like a lot of other people) but then, there is also a certain clarity when people just say that they will not engage in a discussion.

    With regards to insufficient communication, I think the main thing to understand is that Elm is an experiment in doing things differently, and that causes a clash with conventional understanding. Elm is about getting off the upgrade treadmill. So, for example, when a new release like Elm 0.19 comes out, it happens without public alpha and beta phases, and it’s not actually the point where you go and immediately migrate your production code to it! It’s only the point to start experimenting with it, the point where library and tool authors can upgrade, and so on. (There was quite a bit of activity prior to the release anyway; it just wasn’t advertised publicly.)

    Finally, the most contentious example of a “feature” getting removed is the so-called native modules (which basically means the ability to have impure functions written in JS in your Elm code base). As far as I can tell (having followed Elm since 0.16), native modules were always an internal implementation detail and their use was never encouraged. Nevertheless, some people started using them as a shortcut anyway. However, they were a barrier to enabling function-level dead code elimination, which is the main feature of the 0.19 release, so the loophole was finally closed. Sure, it’s inconvenient for the people who used them, but does anyone complain when, say, Apple removes an internal API?

    Ultimately, Elm is just an open source project and the core maintainers don’t really owe anybody anything - no contracts are entered into and no funds are exchanged. They can do whatever they want.

    Of course, there is a question of the long term effects this approach is going to have on the community. Will it alienate too many people and cause Elm to wither? Will Elm remain a niche language for a narrow class of applications? That remains to be seen.

    1. 19

      but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time.

      Over the years, I have come to believe this is a vital part of building a community. Using draconian tactics to stomp out annoying comments is using power unwisely and worse yet – cripples your community in multiple ways.

      The first thing to remember is that when a comment (entitled, uninformed or otherwise) comes up repeatedly – that is a failure of the community to provide a resource to answer/counter/assist with that comment. That resource can be a meme, a link, an image, a FAQ, a full-on detailed spec document, whatever. This type of thing is part of how a community gets a personality. I think a lot of the reason there are a bunch of dead Discourse forums for projects is overly stringent policing. You should have a place for people to goof off, and you have to let the community self-police and become a real community. Not entirely, obviously, but on relevant topics.

      This constant repetition of questions/comments is healthy and normal; it is the entrance of new people to the community. More importantly, it gives people who are just slightly deeper in the community someone to help, someone to police, someone to create resources for, even to a degree someone to mock (reminding them they aren’t THAT green anymore) – a way to be useful! This is a way to encourage growth; each “generation” of people helps the one that comes after them – and it is VITAL for building up a healthy community. In a healthy community the elders will only wade in occasionally and sporadically to set the tone, and will focus on the more high-minded, reusable solutions that move the project forward. Leave the minor stuff to the minor players, let them shine!

      Beyond being vital to building the community – it is a signal of where newcomers are hurting. Now if documentation fixes the problem, or a meme… terrific! But if it doesn’t, and if it persists… that is a pain point to look at – that is a metric – that is worth knowing.

      1.  

        Yeah, each one of these people gives you a chance to improve how well you communicate, and to strengthen your message. But shutting down those voices runs the risk of surrounding yourself with ‘yes people’ who don’t challenge your preconceptions. Now, it’s entirely up to the Elm people to do this, but I think they are going to find it harder to go mainstream with this style of community.

        Note that I’m perfectly fine with blocking and sidelining people who violate a CoC, or are posting hurtful, nonconstructive comments. You do have to tread a fine line in your moderation though. Being overly zealous in ‘controlling the message’ can backfire in unpredictable ways.

        Anyway, I continue to follow Elm because I think the designer has some excellent ideas and approaches, even if I do disagree with some of the ways the community is managed.

        1.  

          even if I do disagree with some of the ways the community is managed.

          I don’t think the two jobs (managing the community and managing the project) should necessarily be done by the same person. I actually think it probably shouldn’t. Each job is phenomenally challenging on its own – trying to do both is too much.

          1.  

            Yeah, completely agree! I think it would take a huge weight off that person’s shoulders too! :)

            1.  

              I don’t think Evan personally moderates the forums. Other people do it these days.

              1.  

                But, they do it on his behalf? This policy of locking and shutting down discussions comes from somewhere. That person directly or indirectly is the person who “manages” the community, the person who sets the policies/tone around such things.

                I personally have no idea, I am not active in the Elm community.

                1.  

                  I’m not sure who sets the policy and how.

          2.  

            That’s a very interesting perspective, thanks.

          3. 10

            I’ll add the perspective of someone who loved Elm and will never touch it again. We’re rewriting in PureScript right now :) I’m happy I learned Elm, it was a nice way of doing things while it lasted.

            In Elm you may eventually hit a case where you can’t easily wrap your functionality in ports, the alternative to native modules. We did, many times. The response on the forum and other places is often to shut down your message, to give you a partial version of that functionality that isn’t quite what you need, to tell you to wait until that functionality is ready in Elm (a schedule that might be years!), or until recently to point you at native modules. This isn’t very nice. It’s actually very curious how nice the Elm community is unless you’re talking about this feature, in which case it feels pretty hostile. But that’s how open source rolls.

            Look at the response to the message linked in the story: “We recently used a custom element to replace a native module dealing with inputs in production at NoRedInk. I can’t link to it because it’s in a proprietary code base but I’ll be writing and speaking about it over the next couple months.”

            This is great! But I can’t wait months in the hope that someone will talk about a solution to a problem I have today. Never mind releasing one.

            Many people did not see native modules as a shortcut or a secret internal API. They were an escape valve. You would hit something that was impossible without large efforts that would make you give up on Elm as not being viable. Then you would overcome the issues using native modules which many people in the community made clear was the only alternative. Now, after you invest effort you’re told that there’s actually no way to work around any of these issues without “doing them the right way” which turns out to be so complicated that companies keep them proprietary. :(

            I feel like many people are negative about this change because it was part of how Elm was sold to people. “We’re not there yet, but here, if we’re falling short in any way you can rely on this thing. So keep using Elm.”

            That being said, it feels like people are treating this like an apocalypse, probably because they got emotionally invested in something they like and they feel like it’s being changed in a way that excludes them.

            You’re right though. Maybe in the long term this will help the language. Maybe it will not. Some people will enjoy the change because it does lead to a cleaner ecosystem and it will push people to develop libraries to round out missing functionality. In the short term, I have to get things done. The two perspectives often aren’t compatible.

            I’m personally more worried about what will happen with the next major change where Elm decides to jettison part of its community. I don’t want to be around for that.

            1.  

              If people encouraged you to use native modules, then that was unfortunate.

              I’m not sure I understand the issue with custom elements. Sure, they’re a bit complicated and half baked but it certainly doesn’t require a research lab to use them (in fact, I’ve just implemented one now).

              I would agree, however, that the Elm developers have a bit of a hardline approach to backward compatibility. Perhaps there is a misunderstanding around the state of Elm - ie whether it’s still an experiment that can break compatibility or a stable system that shouldn’t.

              I’m not sure how I feel about backward compatibility. As a user, it’s very convenient. As a developer, it’s so easy to drown in the resulting complexity.

            2. 9

              I would prefer that the discussions weren’t removed or locked, but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time. I’ve read most of these discussions, and other than people venting, nothing is ever achieved in them. My reflexive reaction is to be uncomfortable (like a lot of other people) but then, there is also a certain clarity when people just say that they will not engage in a discussion.

              I’ll go one further and say I’m quite glad those discussions get locked. Once the core team has made a decision, there’s no point in having angry developers fill the communication channels the community uses with unproductive venting. I like the decisions the core team is making, and if those threads didn’t get locked, I’d feel semi-obligated to respond and say that I’m in favor of the decision, or I’d feel guilty not supporting the core devs because I have other obligations. I’m glad I don’t have to wade through that stuff. FWIW, it seems like the community is really good at saying “We’re not going to re-hash this decision a million times, but if you create a thread about a specific problem you’re trying to solve, we’ll help you find an approach that works” and they follow through on that.

              I don’t have a lot of sympathy for folks who are unhappy with the removal of the ability to compile packages that include direct JS bindings to the Elm runtime. For as long as I’ve been using Elm, the messaging around that has consistently been that it’s not supported, it’s just an accidental side effect of how things are built, and you shouldn’t do it or you’re going to have a bad time. Now it’s broken and they’re having a bad time. This should not be a surprise. I also think it’s a good decision to actively prohibit it. If people started using that approach widely, it would cause a lot of headaches for the community and hamstring the core team’s ability to evolve the language.

              1. 6

                I’m quite glad those discussions get locked

                and

                I like the decisions the core team is making

                Do you believe your position would change if you didn’t agree with the developers’ decisions? Obviously I have a different perspective, but I am curious whether you think you would still feel this way if you were on the other side?

                Additionally, just because the core team has “made a decision” doesn’t mean it wasn’t a mistake, nor that it is permanent. Software projects make mistakes all the time, and sometimes the only way to really realize a mistake is to hear the howls of your users.

                1.  

                  I’m pretty confident I wouldn’t change my position on this if I wasn’t in agreement with the core team’s choices. I might switch to PureScript or ReasonML if I think the trade-offs are worth it, but I can’t see myself continuing to complain/vent after the decision has been made. I think appropriate user input is “I have this specific case, here’s what the code looks like, here’s the specific challenge with any suggested alternative.” If the core team decides to go another way after seeing those use cases, it’s clear we don’t have the same perspective on the trade-offs for those decisions. I can live with that. I don’t expect everybody to share my opinion on every single technical decision.

                  As an example, I use Clojure extensively at work, and I very much disagree with Rich Hickey’s opinions about type systems, but it’s pretty clear he’s thought through his position and random folks on the internet screaming differently isn’t going to change it, it’ll just make his job more difficult. I can’t imagine ever wanting to do that to someone.

                  sometimes the only way to really realize the mistake is the hear the howls of your users

                  It’s been my experience that the folks who can provide helpful feedback about mistaken technical decisions rarely howl. They can usually speak pretty clearly about how decisions impact their work and are able to move on when it’s clear there’s a fundamental difference in goals.

                  1.  

                    It’s been my experience that the folks who can provide helpful feedback about mistaken technical decisions rarely howl.

                    We fundamentally disagree on this point (and the value of the noisy new users), and I don’t think either of us is going to convince the other. So, I think this is a classic case of agree to disagree.

              2. 10

                I think what bothers me the most about the core team’s approach to features is not that they keep removing them, but that for some they do not provide a valid alternative.

                They’ll take away the current implementation of native modules, but coming up with a replacement is too hard, so even though the core libraries can use native code, us peasants will have to do without.

                They won’t add a mechanism for higher rank polymorphism because coming up with a good way to do it is hard, so even though the base library has a few magic typeclasses for its convenience, us peasants will have to make do with mountains of almost duplicated code and maybe some code generation tool.

                So where does that leave Elm right now? Should it be considered a production-ready tool just by virtue of not having very frequent releases? Or should it be regarded as an incomplete toy language, because of all the breaking changes between releases, all the things that haven’t been figured out yet, and how the response to requests for ways to do things that are necessary in real code is either “you don’t need that”, which I can live with most of the time, or “deal with it for the moment”, which is unacceptable.

                I think Elm should make it clearer that it’s essentially an unfinished project.

                1.  

                  They’ll take away the current implementation of native modules, but coming up with a replacement is too hard

                  They won’t add a mechanism for higher rank polymorphism because coming up with a good way to do it is hard

                  I don’t think this is a fair characterization of the core team’s reasons for not supporting those features. I’ve read/watched/listened to a lot of the posts/videos/podcasts where Evan and other folks discuss these issues, and I don’t think I’ve ever heard anyone say “We can’t do it because it’s too difficult.” There’s almost always a pretty clear position about the trade-offs and motivations behind those decisions. You might not agree with those motivations, or weigh the trade-offs the same way, but it’s disingenuous to characterize them as “it’s too hard”

                  1.  

                    I exaggerate in my comment, but what I understood from the discussions around rank n polymorphism I’ve followed is basically that Evan doesn’t think any of the existing solutions fit Elm.

                    I understand that language design, especially involving more complex features like this, is a hard issue, and I’m sure Evan and the core team have thought long and hard about this and have good reasons for not having a good solution yet, but the problem remains that hard things are hard and in the meantime the compiler can take an escape hatch and the users cannot.

                  2.  

                    Should it be considered a production-ready tool just by virtue of not having very frequent releases? Or should it be regarded as an incomplete toy language

                    I always struggle with this line of questioning because “incomplete and broken” describes pretty much all of the web platform in the sense that whenever you do non-trivial things, you’re going to run into framework limitations, bugs, browser incompatibilities and so on.

                    All you can do is evaluate particular technologies in the context of your specific projects. For certain classes of problems, Elm works well and is indeed better than other options. For others, you’ll have to implement workarounds with various degrees of effort. But again, I can say the same thing for any language and framework.

                    Is it good that it’s so easy to bump up against bugs and limitations? No. But at least Elm is no worse than anything else.

                    Taking a tangent, the main problem is that Elm is being built on top of the horrifically complex and broken foundation that is the web platform. It’s mostly amazing to me that anything works at all.

                    1. 7

                      To me the problem is that Elm is not conceptually complete. I listed those issues specifically because they’re both things that the compiler and the core libraries can do internally, but the users of the language cannot.

                      But at least Elm is no worse than anything else.

                      No, Elm is a language, and not being able to do things in a language with so few metaprogramming capabilities is a pretty big deal compared to a missing feature in a library or a framework, which can easily be added in your own code or worked around.

                      1.  

                        But how is this different from any other ecosystem? The compiler always has more freedom internally. There are always internal functions that platform APIs can use but your library cannot. Following your logic, we should condemn the Apple core APIs and Windows APIs too.

                        1.  

                          No, what I meant is that the core libraries use their “blessed” status to solve those problems only for themselves, thus recognizing that those problems effectively exist, but the users aren’t given any way to deal with them.

                          1.  

                            But there are actually solutions on offer: ports and custom elements. What’s wrong with using them?
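                            For what it’s worth, a port is not much code. A minimal sketch (the module and port names here are made up for illustration) of the Elm declarations:

                            ```elm
                            port module Clipboard exposing (copy, copied)

                            -- Outgoing: ask the JavaScript side to copy a string
                            port copy : String -> Cmd msg

                            -- Incoming: the JavaScript side reports whether it succeeded
                            port copied : (Bool -> msg) -> Sub msg
                            ```

                            and the matching wiring on the JavaScript side:

                            ```javascript
                            // `app` is the value returned by Elm.Main.init(...)
                            app.ports.copy.subscribe(function (text) {
                              navigator.clipboard.writeText(text)
                                .then(function () { app.ports.copied.send(true); })
                                .catch(function () { app.ports.copied.send(false); });
                            });
                            ```

                            The trade-off is that everything crosses the boundary as asynchronous messages rather than function calls, but the setup itself is small.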

                            1.  

                              Ports are very limiting and require much more work to set up than a normal library, and I haven’t used custom elements so I can’t speak for those.

                              There’s also no workaround for the lack of ad-hoc polymorphism. One of the complaints I hear the most about Elm is that writing JSON encoders and decoders is tedious and that they quickly become monstrously big and hard to maintain; often the JSON deserialization modules end up being the biggest modules in an Elm project.

                              This is clearly a feature the language needs (and already uses with some compiler magic, in the form of comparable, appendable, and so on).
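                              To make the decoder complaint concrete, here’s a minimal hand-written decoder (the type and field names are invented for illustration):

                              ```elm
                              import Json.Decode as Decode exposing (Decoder)

                              type alias User =
                                  { name : String
                                  , age : Int
                                  , email : String
                                  }

                              -- Every field is listed again, in order, with its type restated
                              userDecoder : Decoder User
                              userDecoder =
                                  Decode.map3 User
                                      (Decode.field "name" Decode.string)
                                      (Decode.field "age" Decode.int)
                                      (Decode.field "email" Decode.string)
                              ```

                              Each field ends up spelled out in the type alias, in the decoder, and usually again in an encoder; with ad-hoc polymorphism or derivation, that mapping could come from the type definition itself.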

                      2. 7

                        Is it good that it’s so easy to bump up against bugs and limitations? No. But at least Elm is no worse than anything else.

                        Having worked with ClojureScript on the front-end for the past 3 years, I strongly disagree with this statement. My team has built a number of large applications using Reagent and whenever new versions of ClojureScript or Reagent come out all we’ve had to do was bump up the versions. We haven’t had to rewrite any code to accommodate the language or Reagent updates. My experience is that it’s perfectly possible to build robust and stable tools on top of the web platform despite its shortcomings.

                        1.  

                          I think the ease of upgrades is a different discussion. There is a tool called elm-upgrade which provides automated code modifications where possible. That’s pretty nice, I haven’t seen a lot of languages with similar assistance.

                          My point was, you cannot escape the problems of the web platform when building web applications. Does ClojureScript fully insulate you from the web platform while providing all of its functionality? Do you never run into cross-browser issues? Do you never have to interoperate with JavaScript libraries? Genuinely asking - I don’t know anything about ClojureScript.

                          1.  

                            My experience is that the vast majority of issues I had with the web platform went away when my team started using ClojureScript. We run into cross-browser issues now and then, but it’s not all that common since React and Google Closure do a good job handling cross-browser compatibility. Typically, most of the issues that we run into are CSS related.

                            We interoperate with JS libraries where it makes sense; however, the interop is generally kept at the edges and wrapped into libraries providing idiomatic data-driven APIs. For example, we have a widgets library that provides all kinds of controls like date pickers, charts, etc. The API of that library is similar to our internal widgets API.

                            1.  

                              Sounds like a great development experience!

                              Let me clarify my thinking a bit. For a certain class of problems, Elm is like that as well. But it certainly has limitations - not a huge number of libraries etc.

                              However, I think that pretty much everything web related is like that - limitations are everywhere, and they’re much tighter than I’d like. For example, every time I needed to add a date picker, it was complicated, no matter the language/framework. But perhaps your widgets library has finally solved it - that would be cool!

                              So I researched Elm and got a feel for its limitations, and then I could apply it (or not) appropriately.

                              I would agree, however, that the Elm developers have a bit of a hardline approach to backward compatibility. Perhaps there is a misunderstanding around the state of Elm - ie whether it’s still an experiment that can break compatibility or a stable system that shouldn’t.

                              I’m not sure how I feel about backward compatibility. As a user, it’s very convenient. As a developer, it’s so easy to drown in the resulting complexity.

                              1.  

                                Yeah, I agree that the main question is around the state of Elm. If the message is that Elm isn’t finished, and that you shouldn’t invest in it unless you’re prepared to spend time keeping up, that’s perfectly fine. However, if people are being sold on a production-ready language that just works, there appears to be a bit of a disconnect.

                                It’s obviously important to get things right up front, and if something turns out not to work well it’s better to change it before people get attached to it. On the other hand, if you’re a user of a platform then stability is really important. You’re trying to deliver a solution to your customers, and any breaking changes can become a serious cost to your business.

                                I also think it is important to be pragmatic when it comes to API design. The language should guide you to do things the intended way, but it also needs to accommodate you when you have to do something different. Interop is incredibly important for a young language that’s leveraging a large existing ecosystem, and removing the ability for people to use native modules in their own projects without an alternative is a bit bewildering to me.

                          2.  

                            I have the opposite experience. Team at day job has some large CLJS projects (also 2-3 years old) on Reagent and Re-Frame. We’re stuck on older versions because we can’t update without breaking things, and by nature of the language it’s hard to change things with much confidence that we aren’t also inadvertently breaking things.

                            These projects are also far more needlessly complex than their Elm equivalents, and also take far longer to compile so development is a real chore.

                            1.  

                              Could you explain what specifically breaks things in your project, or what makes it more complex than the Elm equivalent? The Reagent API has had no regressions that I’m aware of, and re-frame had a single breaking change, where the original reg-sub was renamed to reg-sub-raw in v0.7 as I recall. I’m also baffled by your point regarding compiling. The way you develop ClojureScript is by having Figwheel or shadow-cljs running in the background and hot-loading code as you change it. The changes are reflected instantly as you make them. Pretty much the only time you need to recompile the whole project is when you change dependencies. The projects we have at work are around 50K lines of ClojureScript on average, and we’ve not experienced the problems you’re describing.

                      3.  

                        I was hoping to read other points of view on this matter, thanks for taking the time to write down yours!

                      1. 4

                        I know it’s easy to be in a constant state of rage at Uber, and this news makes it extremely easy to pile on. An innocent person died here, at the fault of a team of engineers attempting to do something incredibly difficult. I know for sure that this will bring up (and has already, I’m sure) talking-head discussions on the ethics of AI, who will be charged (why/why not), and tons more litigation and lawsuits. But let’s not forget to sympathize with the engineering team here as well. This has to be the worst feeling ever, and it could have happened to any of us; it had to happen to someone.

                        My condolences to the innocent pedestrian’s family and friends. Also, my condolences to the team who will carry this loss on their sleeve for the rest of their lives.

                        1. 2

                          It seems like you haven’t read the article very carefully.

                          1. You completely forgot to mention the operator behind the wheel - If anyone, that person will most likely be charged and, regardless of the verdict, carry it for the rest of their life.

                          2. From https://www.sfchronicle.com/business/article/Exclusive-Tempe-police-chief-says-early-probe-12765481.php:

                          … it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway, …

                          … I suspect preliminarily it appears that the Uber would likely not be at fault in this accident, either, …

                          Sylvia Moir, police chief in Tempe, Arizona

                          1. 9

                            Two things.

                            First, the operator is the one person that can hardly be blamed. The idea that a car can drive itself and someone will step in when something goes wrong is fundamentally flawed. Engineers have known for decades that this doesn’t work. Understanding what happens at the point of handoff and how long it takes is a fundamental part of aircraft safety and CRM. It takes humans time to assess a situation and step in to take control.

                            Second, police often blame victims in car crashes. That’s in part why so few ever get prosecuted and the situation doesn’t change. I’ll believe it when Uber releases video of what happened.

                            1. 1
                              1. You completely forgot to mention the operator behind the wheel - If anyone, that person will most likely be charged and, regardless of the verdict, carry it for the rest of their life.

                              Presumably the operator is part of the engineering team, no? I’m not a District Attorney, or an attorney, or even a law enforcement officer. Therefore, I’m unable to comment on whether or not the operator will be charged, whether it makes sense to charge this person, or whether we’ll find that Uber put a car on the road that was not street legal, which contributed to the crash.

                              Please don’t assume I didn’t read carefully. I tried to choose my words carefully in order to not speculate on the details of an on-going investigation.

                              1. … it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway, …

                              Exactly. This makes the investigation all that more important. Maybe no one will be charged because investigators will rule it an accident based purely on the fact that, autonomous or not, it was unavoidable based on the pedestrian’s actions.

                              1. 1

                                I think your second point raises an interesting issue. It may have been difficult for a human driver to see this person, but from the information given and all the pictures I’ve seen, it shouldn’t have been difficult for an autonomous driver to see them using different sensors (like depth or IR).

                                It shouldn’t have been speeding and it should have slowed down further or changed lanes when it saw that it was coming up on a pedestrian in the median.

                                This is the second incident I know of where an autonomous car has gotten into trouble, in part, by mimicking stupid human behavior. We have the technology to avoid things like this, and the standard for computer drivers needs to be significantly higher than the standard for humans. The NTSB needs to get these things off the road until they’re properly tested.

                              2. 1

                                The fault actually lies with the driver, who was instructed to be alert and keep both hands on the wheel at all times. Obviously Uber should not have released this, and they should get shit for it, but I think until there’s nobody behind the wheel the responsibility for any accident falls on the driver, just as it does with planes presently.

                                1. 7

                                  The fault lies with the people that put the driver there. It’s beyond comprehension that they would rely on a safety driver. We’ve known for decades that humans cannot effectively monitor a system that’s mostly reliable. The fact that this cannot be done goes back to Kibler (1965), was already understood by Bainbridge (1983), and by Molloy & Parasuraman (1995) there was extensive research digging deep into why people are unable to do this and how to design environments where they can.

                                  It is irresponsible of Uber/Waymo/GM and all of the manufacturers to put people in an impossible situation.

                                  1. 1

                                    Apparently, according to reports, it required intervention roughly every mile. I do agree there should be laws against putting such a weak system on the road. It should be able to drive unassisted at least as well as a human driver before we put humans behind the wheel, but past that point the driver should be culpable for failing to pay attention, especially if the driver were, for example, watching a feature-length film in the driver’s seat.

                                    1. 2

                                      If a company knowingly puts you in an impossible situation where you cannot possibly do a task safely without injuring yourself or others they are generally liable, not you. Unless for example, you’re a professional engineer in which case you have a certain responsibility to inform yourself and say no. Those poor drivers don’t know the research behind visual attention, automation, and fatigue. It feels very unfair to prosecute them for doing their jobs, that they have been told they can do, to the best of their abilities, when they’ve been set up for failure.

                                      1. 1

                                        I completely agree with what you said here.

                                        Now, in retrospect, don’t you think that without such anthropomorphic language selling the “intelligence” and “learning” of machines, Uber (and Google, and Tesla) would have had a harder time putting such cars on the road?

                                        This language is dangerous for anyone who does not understand the math and inner workings of these systems: they can be manipulated too easily.

                                  2. 1

                                    … it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway, …

                                    It sounds plausible that autonomous or not, this may have happened. I don’t want to get into an argument over an investigation that I don’t have any insight into – I’d only be able to speculate, as would you.

                                1. 16

                                  To quote another HN comment:

                                  LIDAR aside, computer vision and a raw video feed is more than enough to have prevented this collision.

                                  Exactly! Engineers designing autonomous cars are required to account for low-visibility conditions, even way worse than what this video shows (think hail, rain, dust, etc.). This was an easy case! And yet the car showed no signs of slowing down.

                                  EDIT: Twitter comments like this pain me. People need to be educated about the capabilities of autonomous cars:

                                  She is walking across a dark road. No lights even though she has a bike. She is not in a cross walk. Not the car’s fault.

                                  Yes it was the car’s fault. This is shocking, extraordinary behavior for an autonomous car.

                                    1. 9

                                      In reality, both the pedestrian and the car (and Uber) share some responsibility. You shouldn’t cross a four lane road at night wearing black outside of a crosswalk. A human driver is very unlikely to see you and stop. Not blaming the victim here, just saying it’s easier to stay safe if you don’t do that. However, the promise of autonomous cars with IR and LIDAR and fancy sensors is that they can see better than humans. In this case, they failed. Not to mention the human backup was very distracted, which is really bad.

                                      From the video I don’t think a human would have stopped in time either, but Uber’s car isn’t human. It should be better, it should see better, it should react better. Automatic collision avoidance is a solved problem already in mass-market cars today, and Uber failed it big time. Darkness is an excuse for humans, but not for autonomous cars, not in the slightest.

                                      She should still be alive right now. Shame on Uber.

                                      1. 18

                                        You can’t conclude from the video that someone would not have stopped in time. Not even a little. Cameras aren’t human eyes. They are much, much worse in low visibility, and in particular with large contrasts, like those of headlights in the dark. I can see just fine in dark rooms where my phone can’t produce anything aside from a black image. It will take an expert to have a look at the camera and its characteristics to understand how visible that person was and from what distance.

                                        1. 9

                                          From the video I don’t think a human would have stopped in time either, but Uber’s car isn’t human.

                                          Certainly not when distracted by a cell phone. If anything, this just provides more evidence that driving while distracted by a cell phone, even in an autonomous vehicle, is a threat to life, and should be illegal everywhere.

                                          1. 9

                                            Just for everyone’s knowledge: you’re 8 times as likely to get into an accident while texting, which is double the rate for drinking and driving.

                                            1. 6

                                              He was not driving.

                                              He was carried around by a self driving car.

                                              I hope that engineers at Uber (and Google, and…) do not need me to note that the very definition of “self driving car” is a huge UI flaw in itself.

                                              That is obvious to anyone who understands UI, UX, or even just humans!

                                              1. 5

                                                She was driving. The whole point of sitting in the driver’s seat of a TEST self-driving car is for the driver to take over and overcome situations like this.

                                                1. 6

                                                  No, she was not.

                                                  Without this incident, you would soon have seen a TV spot with precisely this: a (hot) business woman looking at the new photos her family uploaded on Facebook, with a voice saying something like: “we can bring you to those you Like”.

                                                  The fact that she was paid to drive a prototype does not mean she was an experienced software engineer trained not to trust the AI and to keep continuous control of the car.

                                                  And indeed the software chose the speed. At that speed, human intervention was impossible.

                                                  Also, the software did not deviate, despite the free lane beside it and despite the fact that the victim had to cross that lane first, so there was enough time for a computer to calculate several alternative trajectories, or even simply to alert the victim via light signals or sounds.

                                                  So the full responsibility must be traced back to people at Uber.

                                                  The driver was simply fooled into thinking he could trust the AI by a stupidly broken UI.

                                                  And indeed the driver/passenger reactions were part of Uber’s test.

                                                  1. 2

                                                    Looking at your phone while riding in the driver’s seat is a crime for a reason. Uber’s AI failed horribly and all their cars should be recalled, but the driver failed too. If the driver had not been looking at their phone, literally any action at all could have been taken to avoid the accident. It’s the responsibility of the driver to stay alert with attention on the road, not looking at a phone or reading a book or watching a film; plane pilots do it every single day. Is their attention much more diminished? Yes, of course it is. Should we expect literally zero attention from the “driver”? Absolutely not.

                                                    1. 5

                                                      Do you realize that the driver/passenger reactions were part of the test?

                                                      This is the sort of self driving car that Uber and friends want to realize and sell worldwide.

                                                      And indeed I guess that the “driver’s” behaviour was pretty frequent among the prototypes’ testers.

                                                      And I hope somebody will ask Uber to provide in court the recordings of all the tests done so far, to prove whether they knew that drivers do not actually drive.

                                                      NO. The passenger must not be used as a scapegoat.

                                                      This is an engineering issue that was completely avoidable.

                                                      The driver’s behaviour was expected and desired by Uber.

                                                      1. 4

                                                        You’ve gotta stop doing this black-and-white nonsense. First, stop yelling. I’m not using the passenger as a scapegoat, so I don’t know who you’re talking to. The way the law is written, it’s abundantly clear that this technology is to be treated as semi-autonomous. That does not mean that Uber is not negligent. If you are sitting in a driver’s seat watching Harry Potter while your car drives through a crowd of people, you should be found guilty of negligence, independent of any charges that come to both the lead engineers and owners of Uber. You have a responsibility to at least take some action to prevent deaths that otherwise may be at no fault of your own. You can’t just lounge back while your car murders people, and in the same respect, when riding in the driver’s seat your eyes should not be on your phone, period.

                                                        Edit: That image is of a fully autonomous car, not a semi-autonomous car. There is actually a difference despite your repeated protestations. Uber still failed miserably here, and I hope their cars get taken off the road. I know better than to hope their executives will receive any punishment except maybe by shareholders.

                                                        1. -1

                                                          I guess you are not an engineer, nor a programmer.

                                                          This is simply an engineering view of UI and UX (which are actually part of my daily job).

                                                          There’s no way that a human who is used to seeing a car drive correctly for hours will keep continuous control of the car without driving.

                                                          The human brain notoriously does not work that way.
                                                          If I drive, I keep continuous attention and control of the car. If somebody else drives, I do not.

                                                          Also, I’m stating that Uber was trying to see if people can trust autonomous cars.
                                                          I’m stating that the incident was not the first time a tester was recorded looking at the phone during self-driving, and that Uber knew that and expected that.

                                                          1. 3

                                                            I guess you are not an engineer, nor a programmer.

                                                            This isn’t the first time you’ve pulled statements out of a hat as if they are gospel truth without any evidence and I doubt it will be the last. I think your argument style is dishonest and for me this is the nail in the coffin.

                                                            1. 0

                                                              I’m not sure I understand what you mean…

                                                              The UI problem is really evident, isn’t it?

                                                              The passenger did not perceive herself as a driver.

                                                            2. 2

                                                              If there is “no way” a human can do this, then we’ve certainly never had astronauts pilot a tiny spacecraft to the moon without being able to physically change position, and we certainly don’t have military pilots in fighter jets continuously concentrating while refueling in air on missions lasting 12 hours or more… or… or…. truck drivers driving on roads with no one for miles…or…

                                                              Maybe Uber is at fault here for not adequately psychologically screening its operators, and training them for “scenarios of intense boredom.”

                                                              1. 0

                                                                You are talking about professionals specifically trained to keep that kind of concentration.
                                                                And even a military pilot won’t maintain concentration on the road if her husband is driving and she knows from experience that he is trustworthy.

                                                                I’m talking about Uber’s actual goal here, which is to build “self driving cars” for the masses.

                                                                It’s just a stupid UI design error. A very obvious one to see and to fix.

                                                                Do you really need some hints?

                                                                1. Remove the car’s control from the AI and turn the AI into something that enhances the driver’s senses.
                                                                2. Make it observe the driver’s state and refuse to start if he’s drunk or too tired to drive.
                                                                3. Stop it from starting if any of its parts is not working properly.

                                                                This way the responsibility for an incident would lie with the driver, not with Uber’s board of directors (barring factory defects, obviously).

                                                                1. 4

                                                                  You’re being adversarial just to try to prove your point, which we all understand.

                                                                  You are talking about professionals specifically trained to keep that kind of concentration. And even a military pilot won’t maintain concentration on the road if her husband is driving and she knows from experience that he is trustworthy.

                                                                  A military pilot isn’t being asked (or trained) to operate an autonomous vehicle. You’re comparing apples and oranges!

                                                                  I’m talking about the actual Uber’s goal here, which is to build “self driving cars” for the masses.

                                                                  Yes, the goal of Uber is to build a self-driving car. We know. The goal of Uber is to build a car that is fully autonomous; one that allows all passengers to enjoy doing whatever it is they want to do: reading a book, watching a movie, etc. We get it. The problem is that those goals are just that: goals. They aren’t reality, yet. And there are laws which Uber and its operators must continue to follow in order for any department of transportation to allow these tests to continue, in order to build up confidence that autonomous vehicles are as safe as, or (hopefully) safer than, already licensed motorists. (IANAL, nor do I have any understanding of said laws, so that’s all I’ll say there.)

                                                                  It’s just a stupid UI design error. A very obvious one to see and to fix.

                                                                  So, your point is that the operator’s driving experience should be enhanced by the sensors, and that the car should never be fully autonomous? I can agree to that, and have advocated for that in the past. But, that’s a different conversation. That’s not the goal of Uber, or Waymo.

                                                                  The reason a pedestrian is dead is because of some combination of flaws in:

                                                                  • the autonomous vehicle itself
                                                                  • a distracted operator
                                                                  • (apparently) a stretch of road with too-infrequent crosswalks
                                                                  • a pedestrian jaywalking (perhaps because of the previous point)
                                                                  • a pedestrian not wearing proper safety gear for traveling at night
                                                                  • an extremely ambitious engineering goal of building a fully autonomous vehicle that can handle all of these things safely

                                                                  … in a world where engineering teams use phrases like, “move fast and break things.” I’m not sure what development methodology is being used to develop these cars, but I would wager a guess that it’s not being developed with the same rigor and processes used to develop autopilot systems for aircraft, or things like air traffic controllers, space craft systems, and missile guidance systems…

                                                                  1. 2

                                                                    … in a world where engineering teams use phrases like, “move fast and break things.” I’m not sure what development methodology is being used to develop these cars, but I would wager a guess that it’s not being developed with the same rigor and processes used to develop autopilot systems for aircraft, or things like air traffic controllers, space craft systems, and missile guidance systems…

                                                                    Upvoted for this.

                                                                    I’m not being adversarial to prove a point.

                                                                    I’m just arguing that Uber’s board of directors are responsible and must be accountable for this death.

                                                                    1. 3

                                                                      Nobody here is arguing that the board of directors should not be held accountable. You’re being adversarial because you’re bored is my best guess.

                                                                    2. 2

                                                                      Very well said, on all of it. If anyone is wondering, I’ll even add to your last point what kind of processes the developers of things like autopilots follow. That’s things like DO-178B, with so many assurance activities and so much independent vetting put into it that those evaluated claim it can cost thousands of dollars per line of code. The methods to similarly certify the techniques used in things like deep learning are in the prototype phase, working on simpler instances of the tech. They’d have had to run rigorous processes at several times the pace and size, at a fraction of the cost, of experienced companies… on cutting-edge techniques requiring new R&D to know how to vet.

                                                                      Or they cut a bunch of corners, hacking stuff together and misleading regulators to grab a market quickly like they usually do. And that killed someone who, despite human factors, should’ve lived if the tech (a) worked at all and (b) was evaluated against common road scenarios that could cause trouble. One or both of these is false.

                                                      2. 2

                                                        I don’t know if you can conclude that that’s the point. Perhaps the driver is there in case the car says “I’m stuck” or triggers some other alert. They may not be an always-on hot failover.

                                                        1. 11

                                                          They may not be an always-on hot failover

                                                          IMO they should be, since they are testing a high risk alpha technology that has the possibility to kill people.

                                                  2. 4

                                                    The car does not share any responsibility, simply because it’s just a thing.

                                                    Nor does Uber, which again is a thing, a human artifact like any other.

                                                    Indeed, we cannot put the car in jail. Nor Uber.

                                                    The responsibility must be traced back to people.

                                                    Who is ultimately accountable for the AI driving the car?

                                                    I’d say Uber’s CEO, the board of directors and the stockholders.

                                                    If Uber were an Italian company, probably the CEO and the board of directors would be put in jail.

                                                    1. 3

                                                      Not blaming the victim here

                                                      People often say this when they’re partly blaming the victim, to avoid seeming overly mean or unfair. We shouldn’t have to when the victim does deserve partial blame, based on one fact: people who put in a bit of effort to avoid common problems and risks are less likely to get hit with negative outcomes. Each time someone ignores one to their peril is a reminder of how important it is to address risks in a way that makes sense. A road with cars flying down it is always a risk. It gets worse at night. Some drivers will have limited senses, be on drugs, or be drunk. Assume the worst might happen, since it often does, and act accordingly.

                                                      In this case, it was not only a four-lane road at night that the person crossed: people who live in the area said on HN that it’s a spot noticeably darker than the other dark spots, and one that stretches out longer. The implication is that there are other places on that road with more light. When I’m crossing at night, I do two to three things to avoid being hit by a car:

                                                      (a) cross somewhere where there’s light

                                                      (b) make sure I see or hear no car coming before I cross.

                                                      Optionally, (c) cross the first one or two lanes, get to the very middle, pause for a double check of (b), and then cross the next two.

                                                      Even with blame mostly on the car and driver, the video shows the human driver would’ve had relatively little reaction time even if real visibility was better than the video suggests. It’s just a bad situation to put a driver in. I think a person crossing at night doing (a)-(c) above might have prevented the accident. I think people should always be doing (a)-(c) above if they value their life, since nobody can guarantee other people will drive correctly. Now we can add that you can’t guarantee their self-driving cars will drive correctly.

                                                      1. 2

                                                        Well put. People should always care about their own lives.
                                                        And they cannot safely assume that others will care as much.

                                                        However, note that Americans learned to blame “jaywalking” through strong marketing campaigns after 1920.

                                                        Before that, the roads were for people first.

                                                        1. 2

                                                          I just saw a video on that from “Adam Ruins Everything.” You should check that show out if you like that kind of stuff. As far as that point goes, it’s true that it was originally done for one reason, but now we’re here in our current situation. Most people’s beliefs have been permanently shaped by that propaganda. The laws have been heavily reinforced. So our expectations of people’s actions, and of what’s lawful, must be compatible with those until they change.

                                                          That’s a great reason to consider eliminating or modifying the laws on jaywalking. You can bet the cops can still ticket you on it, though.

                                                      2. 3

                                                        In reality, both the pedestrian and the car (and Uber) share some responsibility.

                                                        I’ve also seen it argued (convincingly, IMO) that poor civil engineering is also partially responsible.

                                                      3. 3

                                                        And every single thing you listed is mitigated by just slowing down.

                                                        Camera feed getting fuzzy? Slow down. Now you can get more images of what’s around you, combine them for denoising, and re-run your ML classifiers to figure out what the situation is.

                                                        ML models don’t just classify what’s in your sensor feeds. They also give you numerical measures of how close your feed is to the data they were previously trained on. When those measures decline, it could be because the sensors are malfunctioning. It could be rain/dust/etc. It could be a novel, untrained situation. Every single one of those things can be mitigated by just slowing down. In the worst case, you come to a full stop and tell the rider he needs to drive.
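                                                        The mitigation argued for here can be sketched as a trivial mapping from classifier confidence to a speed cap. This is a hypothetical sketch: the names (`Confidence`, `speedCap`) and thresholds are invented for illustration and are not any real vendor API.

```haskell
-- Hypothetical sketch of the "slow down when confidence drops" policy.
-- Confidence, speedCap and all thresholds are invented for illustration.
type Confidence = Double  -- 1.0 = feed closely matches the training data

speedCap :: Confidence -> Double  -- speed cap in metres per second
speedCap c
  | c >= 0.9  = 20.0  -- sensors agree with training data: normal speed
  | c >= 0.5  =  8.0  -- degraded feed: crawl, gather and denoise more frames
  | otherwise =  0.0  -- novel or broken input: full stop, hand off to the rider

main :: IO ()
main = mapM_ (print . speedCap) [0.95, 0.6, 0.1]
```

                                                        The point is only the shape of the policy: every failure mode maps monotonically to a lower cap, with a full stop as the safe default.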

                                                      1. 5

                                                        Interesting example, given in Haskell, about type system complexity:

                                                        length (1, 2)  --> 1    wut?
                                                        length (1, 2, 3)  --> *incomprehensible error* 
                                                        
                                                        1. 4

                                                          FWIW, this is all caused by a Foldable ((,) a) instance that is already quite controversial in the Haskell community [1]. It isn’t the only controversial Foldable instance either - did you know there is a Foldable (Either a) [2]?

                                                          The main friction is that removing instances that were previously there may cause code that currently compiles to stop compiling. One suggestion I personally like is to have a compiler warning for the pathological cases [3].
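                                                          For the curious, the surprising behaviour is easy to reproduce (a minimal sketch, assuming a GHC recent enough that Prelude’s length and sum are Foldable-based, i.e. 7.10 or later):

```haskell
-- Minimal demonstration of the controversial Foldable instances
-- for pairs and Either (both live in base's Data.Foldable).

pairLength :: Int
pairLength = length (1 :: Int, 2 :: Int)          -- 1: a pair folds over snd only

pairSum :: Int
pairSum = sum (1 :: Int, 2 :: Int)                -- 2: the first component is ignored

leftLength :: Int
leftLength = length (Left 'x' :: Either Char Int) -- 0: Left counts as an empty container

main :: IO ()
main = print (pairLength, pairSum, leftLength)
```

                                                          The fully applied types ((,) Int and Either Char here) are what make these compile; the three-tuple in the grandparent comment fails only because no Foldable ((,,) a b) instance exists.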

                                                          1. 3
                                                            <interactive>:6:1: error:
                                                                • No instance for (Foldable ((,,) t0 t1))
                                                                    arising from a use of ‘length’
                                                            

                                                            — what’s incomprehensible about this?

                                                            1. 4

                                                              Hmm, well, this is what I got, which is pretty incomprehensible to someone starting out with Haskell, I think.

                                                              <interactive>:4:1: error:
                                                                  • No instance for (Foldable ((,,) t0 t1))
                                                                      arising from a use of ‘length’
                                                                  • In the expression: length (1, 2, 3)
                                                                    In an equation for ‘it’: it = length (1, 2, 3)
                                                              <interactive>:4:9: error:
                                                                  • Ambiguous type variable ‘t0’ arising from the literal ‘1’
                                                                    prevents the constraint ‘(Num t0)’ from being solved.
                                                                    Probable fix: use a type annotation to specify what ‘t0’ should be.
                                                                    These potential instances exist:
                                                                      instance Num Integer -- Defined in ‘GHC.Num’
                                                                      instance Num Double -- Defined in ‘GHC.Float’
                                                                      instance Num Float -- Defined in ‘GHC.Float’
                                                                      ...plus two others
                                                                      ...plus three instances involving out-of-scope types
                                                                      (use -fprint-potential-instances to see them all)
                                                                  • In the expression: 1
                                                                    In the first argument of ‘length’, namely ‘(1, 2, 3)’
                                                                    In the expression: length (1, 2, 3)
                                                              <interactive>:4:11: error:
                                                                  • Ambiguous type variable ‘t1’ arising from the literal ‘2’
                                                                    prevents the constraint ‘(Num t1)’ from being solved.
                                                                    Probable fix: use a type annotation to specify what ‘t1’ should be.
                                                                    These potential instances exist:
                                                                      instance Num Integer -- Defined in ‘GHC.Num’
                                                                      instance Num Double -- Defined in ‘GHC.Float’
                                                                      instance Num Float -- Defined in ‘GHC.Float’
                                                                      ...plus two others
                                                                      ...plus three instances involving out-of-scope types
                                                                      (use -fprint-potential-instances to see them all)
                                                                  • In the expression: 2
                                                                    In the first argument of ‘length’, namely ‘(1, 2, 3)’
                                                                    In the expression: length (1, 2, 3)
                                                              
                                                              1. 5

                                                                Yeah, sadly GHC error messages are pointlessly hard to read.

                                                                The first should just say “There is no instance of Foldable for (a,b,c)”.

                                                                The other two are very standard messages you’ll see all the time. You don’t even need to read them. Actually, GHC should be taught to simply not produce them in cases like this; they’re a consequence of the previous error. GHC should be printing something like “I don’t know what type to assign to literal ‘1’ because there are no constraints on it. If there are other type errors, fixing them may add additional constraints. If not, annotate the literal with a type like (1 :: Int)”.

                                                                Basically, 1 on its own doesn’t mean much. It could be an integer, a double, a km/s, a price, the unit vector, etc. As long as the type has a Num instance available, 1 can be converted to it. Since the type controls the behavior of that value, you need to know what it is before you can run the code.
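                                                                To make that concrete, here’s a small sketch (the names are mine, purely for illustration): the same literal becomes different types depending on the annotation, and annotating is exactly what resolves the ambiguity.

                                                                ```haskell
                                                                -- A bare numeric literal is polymorphic: its type is Num a => a,
                                                                -- so it can become any type with a Num instance.
                                                                asInt :: Int
                                                                asInt = 1

                                                                asDouble :: Double
                                                                asDouble = 1        -- the same literal, now a Double

                                                                -- Annotating one element pins down the whole list, so this compiles:
                                                                listLen :: Int
                                                                listLen = length [1, 2, 3 :: Int]   -- 3
                                                                ```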

                                                                1. 3

                                                                  I agree that the amount of information GHC outputs is overwhelming (the GHC typechecker looks to me like a complicated solver environment that might be better served by its own interactive mode and type-level debugger, Coq-style). On the other hand, the source of the error is clearly stated in the first two lines of the message, which is why it’s hardly “incomprehensible”.

                                                                  1. 2

                                                                    To me this looks like the unreadable messages from g++. You just have to learn to read through it!

                                                                  2. 3

                                                                    For someone new to Haskell, the notion that this would become a readable message if you invested the time to learn is hard to fathom. I think that sort of imagination barrier is why things with steep learning curves are generally less popular.

                                                                    1. 9

                                                                      There’s nothing Haskell-specific about bad error messages. And it’s nothing to do with imagination or the learning curve of Haskell. It’s just the obtuse way that GHC error messages are written and the lack of interest in making them better.

                                                                      If this message said “(,,) is not an instance of Foldable” no one would find it difficult to comprehend.

                                                                      That being said, the tuple instance of Foldable is really horrible and confusing. Either length shouldn’t be in Foldable and we should use some other concept (“slotness” or something), or that instance shouldn’t be there by default. But this has nothing to do with Haskell or type systems. It’s just as if Java had shipped a terrible, misnamed class that gave you the wrong answer to an obvious query.
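                                                                      For anyone who hasn’t hit it yet, a sketch of why that instance surprises people (easy to check in GHCi): the pair instance folds only over the second component, and no Foldable instance exists for triples at all.

                                                                      ```haskell
                                                                      -- Foldable ((,) a) treats a pair as a one-element container holding
                                                                      -- only its second component, so "container" functions give odd answers:
                                                                      pairLength :: Int
                                                                      pairLength = length (10, 20)       -- 1, not 2

                                                                      pairSum :: Int
                                                                      pairSum = sum ("ignored", 5)       -- 5: the first component plays no part

                                                                      -- There is no Foldable instance for triples, hence the original error:
                                                                      -- length (1, 2, 3)  -- No instance for (Foldable ((,,) t0 t1))
                                                                      ```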

                                                                1. 8

                                                                  We call artificial neural networks a class of deterministic algorithms that can statistically approximate any function

                                                                  they are just applied statistics, not an inscrutable computer brain

                                                                  The counterpoint to this is that we don’t actually know yet whether our brain isn’t just a mechanism that can statistically approximate any function. The difference, of course, is that even if brains were analogous to neural nets (which we currently don’t know enough to say either way), the complexity is just not there. The AIs are like a guppy or a tadpole: very, very good at some specific task like swimming, but they aren’t doing any “thinking” as we do, because they simply are nowhere near complex enough.

                                                                  I’m not saying our brains are analogous to neural nets; I am saying we don’t actually know enough to pinpoint the importance of the structural covariance of human brains. The structure could be entirely where the intelligence comes from, or it could be very little of it. The important thing, instead of saying it’s not a computer brain, is to say that it’s more like reflexes: completely unconscious, but potentially very skilled. This will help prevent people doubting your “not an inscrutable computer brain” claim, because when something does a task better than them they’re going to think it’s smarter than them, when really, for that AI, it’s more of a reflex.

                                                                  1. 2

                                                                    it’s more like reflexes. Completely unconscious but potentially very skilled

                                                                    You’re making a good point but I stumbled on this bit. Intelligence and consciousness are potentially two very different things, so that’s an entirely different line of enquiry. It might be worth framing this as “very skilled, but in a very small set of tasks” instead.

                                                                    1. 1

                                                                      It could be very skilled in a very large set of tasks and still be totally incapable of metacognition.

                                                                      1. 2

                                                                        Yes, we are in agreement. I was just trying to say that it’s better to talk about (current) AI purely in terms of skill and intelligence, as bringing consciousness into it complicates things and is an entirely separate discussion.

                                                                        1. 1

                                                                          That’s what the original author was doing, whether they meant to or not, and it’s what I was responding to.

                                                                    2. 1

                                                                      Thanks for your advice!
                                                                      I get your point, but I don’t think that AI is like a reflex, since a reflex needs far less data to train.

                                                                      While it’s true we know next to nothing about how our brains actually work, they don’t seem to be statistical tools, given how few attempts we need to learn something.

                                                                      However, we are making some progress in our understanding.
                                                                      Here’s an interesting article about the topic. I strongly suggest you follow the links there: the article is nice, but the linked sources are great!

                                                                      1. 6

                                                                        I would not be so confident in saying the brain takes little data to learn. Take human development for example. Babies take well over a year consuming a constant stream of experience (unlabeled training data) to become competent enough to even perform simple actions.

                                                                        In my opinion, learning probably only seems to occur quickly once the brain has matured a bit and has built a sufficiently large set of lower-level concepts, such that new high-level concepts can be reasonably represented by a subset of the previously learned lower-level concepts working in tandem.

                                                                        1. 2

                                                                          Some years of human experience is little compared to the huge amounts of data necessary to build a competent AI. To make the comparison you have to decide how to measure the data of human experience, but the amount that enters your perception is much smaller than, e.g., recorded HD footage.

                                                                          1. 5

                                                                            That’s simply untrue. One eye alone has roughly a resolution of 576 megapixels; 4K is 8.3 megapixels. One inch of skin has on average 19,000 sensory cells. Also keep in mind the human brain has orders and orders and orders of magnitude more complexity. It can afford to relate each thing to everything, instead of having the kind of amnesia that even our most advanced neural networks have. That allows much more complex patterns to be formed much faster.

                                                                            1. 3

                                                                              Yes, but the brain discards most of this information, and apparently the visual cortex works as a sort of lossy compression filter.

                                                                              1. 2

                                                                                This is some significant hand-waving. Does the brain filter? No doubt; it would not be able to pay attention to specific things if it didn’t. Does it also process the entire visual field? How else could it find some specific feature? Keep in mind the brain can detect when around 9 photons hit the eye within less than 100 ms. One study recently claims to confirm with significance that humans can see a single photon, but the study was small, so maybe just those people can. Either way, I’m going to call bullshit on that: it is not smaller than recorded HD footage.

                                                                                1. 3

                                                                                  Does it also process the entire visual field? How else could it find some specific feature?

                                                                                  You’re falling for your brain’s convincing suggestion that you have full HD in your entire visual field. Really you don’t, and your brain just fills most of it in. Your visual system finds specific features by quickly shifting the eye from place to place, until it finds something worth looking at. That’s how it picks out features without processing the entire visual field.

                                                                                  Your peripheral vision has very poor color and shape detection. Mostly it has special cells designed to detect motion, and when it detects motion you often shift your focus to it, thus picking up the color and shape.

                                                                                  The fact that you can perceive a few photons hitting the eye within 100 ms is a matter of sensitivity and latency; it has no bearing on the data bandwidth of your visual system.

                                                                                  This video might help:

                                                                                  https://www.youtube.com/watch?v=fjbWr3ODbAo&t=8m38s

                                                                                  1. 1

                                                                                    Even 1% of my vision has more complexity than HD.

                                                                                    1. 2

                                                                                      Based on what? Your fovea covers only about 2 degrees of the visual field. The visual field is about 75 degrees in either direction. The width of the foveal part of the visual field is less than 1/10 the width of your entire visual field, so the foveal part is much less than 1%. If you view an HD screen from far enough away that it appears the same size as your thumbnail at arm’s length, can you distinguish every pixel?

                                                                                      https://en.wikipedia.org/wiki/Fovea_centralis#Function https://en.wikipedia.org/wiki/Visual_field#Normal_limits https://en.wikipedia.org/wiki/Fovea_centralis#Angular_size_of_foveal_cones

                                                                                      1. 1

                                                                                        I’m realizing now we’re talking about two entirely different things. I’m talking about the complexity of input; you’re talking about the complexity of perception. The former is measurable, and the latter is very nebulous at best.

                                                                                        1. 2

                                                                                          What’s the difference between input and perception? If you don’t perceive something why would it be considered input?

                                                                                  2. 2

                                                                                    You’re entirely right. His statements are totally wrong. Here’s a recent-ish article that does a decent job of describing at least part of the story: https://www.sciencedirect.com/science/article/pii/S089662731200092X

                                                                                    1. 1

                                                                                      Can you explain the relevance of that article? It didn’t seem to say much about the amount of sensory data that enters our perception, based on the abstract.

                                                                                      1. 1

                                                                                        It’s because the original poster said

                                                                                        Yes, but the brain discards most of these informations and apparently the visual cortex works as a sort of lossy compression filter.

                                                                                        That’s just not how human vision works.

                                                                                        You are right though. You only have high acuity vision in the fovea. But notions of resolution don’t map well to human vision. It’s also just not a thing that’s worth debating. Much better to discuss questions of “how much information do you need to recognize X” (generally very little, human vision works well with very small images) than “how many bits per second are coming in”. The second is ill-defined in any case. If I have 20/20 vision does it really mean that it’s good to think of that as I see HD video and someone else sees SD video? Not really. It just doesn’t answer any useful questions about human vision.

                                                                                        1. 2

                                                                                          The right questions to ask depend on what you’re interested in. Asking how many bits are coming in isn’t useful for the study of human vision, but it is useful if you’re trying to relate AI to the human mind. Humans “learn” things with much less data than machines, because they have innate capacities built in, which (so far) are too complex to build into a machine learning algorithm. This has been well established since psychology’s departure from behaviorism, but tech people tend to forget it when comparing the brain to computers. Granted there’s no way to determine how many bits are entering your mind, but with enough understanding of visual perception I think we can make the judgement call that you get less data than full-color HD video. Understanding that highlights how little data humans require to learn things about the world.

                                                                                          I’m also not convinced that lossy compression is not a good metaphor for human vision. Clearly we’re not using mp4 or mkv, but if you take a wider view of the concept of lossy compression, it makes sense.

                                                                                          1. 1

                                                                                            It’s actually not a good proxy for comparing humans to machines either. Far more important than # of bits in is what kind of data you’re getting. For example, data where you have some control (like you get to manipulate relevant objects) seems to be far more important for humans. The famous sticky mittens experiments show this very nicely.

                                                                                            In any case, HD video is mostly irrelevant for actual AI. Most vision algorithms use fairly small images because it’s better to have lots of processing over smaller images than less processing over bigger ones.

                                                                                            I think it’s worth separating “tech people” from people that actually do AI / CV / ML. People that work on these topics aren’t being confused by this. There’s a big push in CV and NLP to try to include semantically relevant features.

                                                                                            It’s worth reading the article I linked to. Human vision is not lossy compression and this model doesn’t fit the data that we have from either human behavior or from neuroscience. Once upon a time people thought this but those days are long long over.

                                                                          2. 5

                                                                            A reflex takes millions of years to train. You aren’t taking into account the entire lifespan of the human. They are using other patterns to infer the present context.

                                                                            I’m not saying we are a statistical tool. I’m saying we can’t actually tell that we aren’t with confidence just like we can’t tell that we are with confidence. I’ll read the links.

                                                                            1. 2

                                                                              It seems like additional complexity implies that each neuron itself approximates a smallish neural network. This doesn’t really change much outside of the obvious complexity growth and design considerations.

                                                                              1. 2

                                                                                Disclaimer: I’m a programmer, not a biologist.

                                                                                The fact that biological neurons exchange RNA (genetic code) looks like something that no artificial neural network can do. If I understand this correctly, it means that each neuron can slowly program the others.

                                                                                Still, you can see in the slides that this is not something I base my reasoning on.
                                                                                I just thought the article might be interesting to you, given your reasoning about reflexes. :-)

                                                                                1. 2

                                                                                  I’m also a programmer and not a biologist :V. However, complexity theory hints at the possibility that simple setups can lead to emergent complexity approaching that of a system with more complex agents. Basically, the complexity of the system as it grows exceeds the additive complexity of the individual agents.

                                                                                  I think it’s totally reasonable to say that a node-and-edge cluster comes nowhere near the complexity of an individual neuron. You can’t, however, use that to conclude that a neural network can’t achieve the same level of complexity or cognition. Programming also isn’t different from any function which takes arbitrarily many inputs, which we already know NNs can approximate.

                                                                                  That is of course not to say that it CAN do all the above, merely to say that we should be cautious of any claims that are conclusive either way talking about the future.

                                                                                  1. 2

                                                                                    Nice! You are approaching one of the core arguments in my talk! :-D

                                                                                    Programming also isn’t different from any function which takes arbitrarily many inputs, which we already know NNs can approximate.

                                                                                    No AI technique that I know of, neither supervised nor unsupervised nor based on reinforcement learning, can remotely approach a function that produces functions as output.

                                                                                    Certainly no technique based on artificial neural networks: there is no isomorphism between the set of outputs of the continuous functions they can approximate (i.e., ℝ) and the set of functions. So whatever the size of your deep-learning ANN, no current technique can produce an intelligence, simply because an ANN cannot express a function through its output.

                                                                                    It’s quite possible that, a couple of centuries from now, we will be able to build an artificial general intelligence, but the techniques we will use will be completely different from the ones we use now.
                                                                                    Moreover, I guess that the role played by ANNs will be peripheral, if not marginal.

                                                                                    That’s the worst threat that the current marketing hype poses to AI.
                                                                                    It’s the most dangerous one.

                                                                                    Eager to attract funds, most of the research community is looking in the wrong direction.

                                                                                    1. 1

                                                                                      There’s a difference between “no current technique can ever produce” and “no technique has produced”. It’s not impossible that a dumb technique could have complex consequences as the complexity increases. Sure, we may not see it in our lifetime, but I think it’s very premature to call it a dead end for intelligence. We should definitely also travel down other paths, but calling this one a dead end is severely jumping the gun.

                                                                                      1. 3

                                                                                        Just to clear something up, since this person is still spreading FUD: there’s plenty of AI that deals with program induction, i.e., ML that learns functions. There’s a lot of NN work on this now. Just search for “program induction neural networks” on Google Scholar.

                                                                                        The world is full of these people that don’t know anything about a topic but think they’re the next messiah and the only ones who see the truth. My physicist and historian friends always complain about the crackpots they have to fight. Guess it’s the AI folks’ turn!

                                                                                        1. 1

                                                                                          ROTFL! :-D

                                                                                          Thanks for the suggestion.

                                                                                          I know nothing about program induction and will surely study the papers I find on Google Scholar.

                                                                                          I suggest you open your mind too.

                                                                                          I do not pretend to know something I don’t.

                                                                                          But if someone tells me that current computers are not deterministic, I can’t help but doubt their understanding of them.

                                                                                          I guess you have never debugged a concurrent multithreaded program.

                                                                                          I did.
                                                                                          And I’ve also debugged multi-processor concurrent kernel schedulers.

                                                                                          Trust me: they are buggy but still deterministic.

                                                                                          They just look crazy and non-deterministic if you do not understand the whole input of your program, which includes time, for example.

                                                                                          The input of a program is everything that affects its computation.

                                                                                          Also, assuming that I am spreading FUD makes it impossible to address the real issues in my slides.

                                                                                          I may suspect that you are spreading hype. ;-)

                                                                                          But I’m still eager for links and serious objections.

                                                                                          Because I want to know. I want to learn.
                                                                                          This requires the acceptance of one’s ignorance.

                                                                                          Do you want to learn too? ;-)

                                                                                          1. 1

                                                                                            RE determinism: I’ve heard (from a reputable source) that you can get pretty good entropy by turning up the gain on an unplugged microphone (electrons tunnel about, causing just enough voltage fluctuation to hiss).

                                                                                            Would love to have a reason to need it…

                                                                                            1. 1

                                                                                              What about GPG key generation?

                                                                                              1. 2

                                                                                                I can get enough entropy for that by mashing my keyboard and waving the mouse about. I’d combine the soundcard approach with an HSM if I needed a lot of entropy on a headless box and didn’t want to trust the HSM vendor.

                                                                                          2. 1

                                                                                            Thanks for being a voice of reason about all this; I honestly don’t have enough domain experience to really hold my ground on “Let’s not jump to conclusions”.

                                                                          1. 8

                                                                            A huge number of issues. I work on this and sadly this is spreading a lot of misinformation.

                                                                            “We call artificial neural networks a class of deterministic algorithms that can statistically approximate any function”

                                                                            This isn’t true for many reasons.

                                                                            There are many NN approaches that are not deterministic. Actually, there are NN approaches that rely on noise.

                                                                            Also, there are other algorithms that can approximate any function. And single-layer networks are ANNs, but they can’t approximate any function.
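
                                                                            To make the single-layer point concrete, here’s a tiny, purely illustrative Python sketch (the names are mine): XOR is the classic function that no single linear-threshold unit can represent, so even a brute-force search over weights finds no fit.

```python
import itertools

# XOR: the classic function a single linear-threshold unit cannot represent.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def perceptron(w1, w2, b, x):
    # One linear unit with a hard threshold -- a "single-layer network".
    return 1 if w1 * x[0] + w2 * x[1] + b >= 0 else 0

# Brute-force a coarse grid of weights and biases; none classifies all
# four points correctly, because XOR is not linearly separable.
grid = [i / 2 for i in range(-8, 9)]
solvable = any(
    all(perceptron(w1, w2, b, x) == y for x, y in xor.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
assert not solvable
```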

                                                                            I’m also not sure what “statistically approximate” means.

                                                                            their output can always be explained (till quantum computing)

                                                                            I don’t know what this means. QC can also be explained just fine. It’s a bunch of linear transformations. That’s not what people mean when they say explain.

                                                                            there is no way to prove they are approximating a specific discrete function

                                                                            I don’t know what this means. I can make lots of continuous functions that will be epsilon within whatever discrete function you pick.
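
                                                                            For what it’s worth, that claim is easy to demo: a steep sigmoid is a continuous function that sits within any epsilon of a step function everywhere outside an arbitrarily small window around the jump (illustrative Python, my own names):

```python
import math

def step(x):
    # A "discrete" target: the indicator function 1[x >= 0].
    return 1.0 if x >= 0 else 0.0

def smooth_step(x, k=100.0):
    # A continuous function; larger k hugs the step more tightly.
    return 1.0 / (1.0 + math.exp(-k * x))

# Outside a small window around 0, the two agree to within eps.
eps = 1e-3
for x in (-1.0, -0.1, 0.1, 1.0):
    assert abs(smooth_step(x) - step(x)) < eps
```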

                                                                            AI is not accountable, so it cannot take decisions over humans

                                                                            Maybe shouldn’t? But it certainly can.

                                                                            The whole “Function” slide is needlessly complex. I wouldn’t let my students put that up.

                                                                            If you suspect that a function exists

                                                                            I don’t know what this means. If functions are just maps what do you mean that a function doesn’t exist?

                                                                            you can try to statistically approximate it with a neural network

                                                                            “statistically approximate it” doesn’t mean anything.

                                                                            This is the strongest strength of neural networks.

                                                                            Other models can approximate any function. That’s not the magic of NNs.

                                                                            we need a big data set to filter out unwanted functions with each sample we feed to it.

                                                                            Except that this is really bad intuition. NNs seem to overfit much less than people expect. Folks train networks with more parameters than data points and they still seem to generalize to new data.

                                                                            Still, infinitely many functions fit our samples!

                                                                            Eh. That’s always true. Infinitely many graphical models will fit something…

                                                                            We can not really know which function a complex ANN will approximate.

                                                                            I don’t even know what to make of this. Is this a statement that you don’t know what your network will learn? Well.. ok. I mean, that’s the point of training something isn’t it?

                                                                            Can we move from narrow intelligence to general intelligence?

                                                                            Domain and Codomain depends on “hardware”, intelligence does not

                                                                            What? Brains take input. They produce outputs.

                                                                            Domain and Codomain are (potentially) infinite sets

                                                                            So? RNNs can make any number of sequences of tokens.

                                                                            No equality relation in the Codomain

                                                                            shrug Who cares?

                                                                            The whole domain -> codomain thing being transitioned into perception -> action isn’t deep at all. Heck, people have been training models to do that for decades. I even do that.

                                                                            I’m not even going to touch the whole knowledge section. It’s not even wrong. It’s naive. This isn’t how people conceive of models or knowledge in cognitive psychology, neural networks, linguistics or philosophy.

                                                                            (to prove) to be general, an Artificial Intelligence should be able to discover and explain us new abstractions and functions over them

                                                                            Eh.. Kittens have general intelligence. Good luck getting them to do any of this! This is a deep question, how do you know something is intelligent. But this isn’t one of the answers.

                                                                            Artificial General Intelligence is Artificial Super Intelligence!

                                                                            Oh God… There are so many constraints on intelligence. The speed of light (which puts constraints on how big a chip can be and how big your brain can be). The speed of chemical reactions. The energy density at which your CPU melts into a heap / your brain swells / you run out of food, etc. The idea that AI is automatically superintelligent is pop science.

                                                                            So, where is the intelligence?

                                                                            This was actually ok! I wouldn’t say it that way but it’s not objectionable!

                                                                            I stopped here. :(

                                                                            1. 3

                                                                              Yes, I suspected something was off about the certainty with which the author presented their evidence. Obviously I’m a novice, but I have some degree of intuition about the things we don’t or can’t know yet. Saying with certainty that neural nets will never achieve intelligence reeks of anthropocentric bias, even if it turned out to be true.

                                                                              1. 1

                                                                                I don’t think the author necessarily implies that. It would be kind of a crazy thing to claim that something we only vaguely have a handle on can or can’t do something we can’t even define, or agree on when we observe it.

                                                                                1. 1

                                                                                  That’s very generous of you.

                                                                                  1. 1

                                                                                    I’m kind of a crazy guy. :-)

                                                                                    As I said, I’m not an expert in statistics.

                                                                                    And you are right, many AI researchers only vaguely have a handle on ANNs.

                                                                                    As for intelligence, maybe we cannot agree, but for sure we can define it. There are several definitions actually. Legg and Hutter describe some of them.

                                                                                    I propose my own, as a composition of a few other functions. It has some advantages over other definitions, and obviously it has some disadvantages.

                                                                                    For sure it shows how far we are from AGI. Is it an advantage? Who knows! :-D

                                                                                2. 1

                                                                                  Huh! Thanks for your answer, and sorry if you feel somehow sad about this!

                                                                                  I think that most of your objections come from a shallow read of the slides. But I can assure you that other colleagues, who also work with ANNs in particular, find them pretty well founded and clear.

                                                                                  Here I pick out and answer only those of your objections that actually relate to what is meant there.

                                                                                  There are many NN approaches that are not deterministic. Actually, there are NN approaches that rely on noise.

                                                                                  False. Computers are deterministic.

                                                                                  When you add true entropy to an algorithm that runs on a computer, you randomize the input, not the algorithm.

                                                                                  The random contribution to the computation can be recorded so that the computation can be replicated.
                                                                                  And you will always get the same outputs.

                                                                                  Obviously, to do that, you must have a clear understanding of what your input is.
                                                                                  Obviously, if you forget a piece (the random part) you cannot reproduce your own results!
                                                                                  But if so… are you sure you are working in the field?
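
                                                                                  A minimal Python sketch of what I mean (the names are mine, just for illustration): once the “random” part is recorded as input, the run is replayable bit for bit.

```python
import random

def noisy_computation(xs, seed):
    # Treat the noise as part of the input by recording its seed.
    rng = random.Random(seed)
    return sum(x + rng.gauss(0, 0.01) for x in xs)

# Same data plus the same recorded seed -> exactly the same output.
run1 = noisy_computation([1.0, 2.0, 3.0], seed=42)
run2 = noisy_computation([1.0, 2.0, 3.0], seed=42)
assert run1 == run2
```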

                                                                                  I’m also not sure what “statistically approximate” means.

                                                                                  Nothing exotic: “statistically approximate” means to approximate through the use of statistics.

                                                                                  An ANN is a statistical algorithm, just like K-means is.
                                                                                  It’s obvious that it’s statistics: you can only use it if you have tons of data.

                                                                                  Had you read all the slides, you would have seen that I explain how the fancy names we are using are good for literature (and business) but not for science. They fool the experts too!

                                                                                  we need a big data set to filter out unwanted functions with each sample we feed to it. … We can not really know which function a complex ANN will approximate.

                                                                                  I don’t even know what to make of this. Is this a statement that you don’t know what your network will learn? Well.. ok. I mean, that’s the point of training something isn’t it?

                                                                                  You have been fooled by the language you use: ANNs do not learn anything. They approximate.

                                                                                  I mean that underfit and overfit are two sides of the same coin: you do not know which one, of the infinitely many functions that fit your dataset, your ANN will approximate.

                                                                                  If you are lucky, it starts to approximate a function that is similar to the one you actually desire.

                                                                                  Otherwise it starts to approximate a function that works well in the region of your training dataset but not outside it (overfit), or it does not even fit the whole training dataset well enough (underfit).
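
                                                                                  A toy Python example of “infinitely many functions fit our samples” (names are mine): two hypotheses that are identical on every training point yet disagree everywhere else, so the data alone cannot tell them apart.

```python
# The function we hope to recover...
def f(x):
    return x

# ...and a rival that agrees with f on exactly the training inputs,
# because the added polynomial vanishes at 0, 1, 2 and 3.
def g(x):
    return x + x * (x - 1) * (x - 2) * (x - 3)

train_xs = [0, 1, 2, 3]
assert all(f(x) == g(x) for x in train_xs)  # indistinguishable on the data
assert f(4) != g(4)                         # but they diverge off the data
```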

                                                                                  AI is not accountable, so it cannot take decisions over humans

                                                                                  Maybe shouldn’t? But it certainly can.

                                                                                  How? We put it in jail? We turn it off? How?

                                                                                  Are you sure you work in the field? Oh… yes I can see… you are! ;-)

                                                                                  No equality relation in the Codomain

                                                                                  shrug Who cares?

                                                                                  The child killed by a self-driving car that turned left by 3 cm.

                                                                                  It turned 3 cm to the left just like every other time. Just in the wrong place at the wrong time.

                                                                                  I’m not even going to touch the whole knowledge section. It’s not even wrong. It’s naive.

                                                                                  Oh, this is a good objection! Can you back it with some papers or even books we can read?

                                                                                  The question is not rhetorical. I would really appreciate such links.

                                                                                  (to prove) to be general, an Artificial Intelligence should be able to discover and explain us new abstractions and functions over them

                                                                                  Eh.. Kittens have general intelligence. Good luck getting them to do any of this!

                                                                                  Ehm… funny you talk about kittens… :-)

                                                                                  Did you see the cat in the slides? Did you understand what it means?

                                                                                  You are so good at pattern matching that you see a cat even if you know that there is no cat there.

                                                                                  The same happens when we see a neural network at work: we see an intelligence, but there is no intelligence there.

                                                                                  It’s also what happened with the Lumière brothers’ first train films.
                                                                                  People saw a train coming towards them, but there was no train.

                                                                                  And it’s the same with kittens.
                                                                                  You see an intelligence because you project your own experience to explain their behavior.

                                                                                  But there is no intelligence there.

                                                                                  The idea that AI is automatically superintelligent is pop science.

                                                                                  First, I was not talking about Artificial Intelligence, but about Artificial General Intelligence.

                                                                                  But had you read more carefully, you would have understood that saying that AGI is ASI is an obvious consequence of the definition of AGI that I propose.

                                                                                  In particular, to be general it must be able to abstract. That means to be able to identify concepts and functions on its own.

                                                                                  Now, we as humans first react to perceptions, then learn from them. Sometimes days later.
                                                                                  This is an effect of our biological evolution. But it also means that our reactions are always suboptimal when we face an event that we cannot explain because it contradicts our knowledge and predictions.

                                                                                  A machine would not have this limit. It can integrate the new perception into its knowledge first and use the new knowledge to react. This gives it an edge over humans.

                                                                                  So when we create an artificial general intelligence, we will create an intelligence with an edge over us.

                                                                                  1. 3

                                                                                    sorry if you feel somehow sad about this!

                                                                                    I feel very sad. I fight against such misinformation all the time. If you had someone with a PhD in machine learning look this over and they were ok with it, I fear for the state of our field.

                                                                                    False. Computers are deterministic.

                                                                                    Ugh. No. That’s silly. If you can produce noise indistinguishable from uniform noise then all of this is an irrelevant detail. It makes zero difference if I happen to hook up a better source of entropy to my computer or not.

                                                                                    This shows that you don’t understand what’s going on here from a mathematical point of view, a recurrent theme on these slides. It leads to a lot of needless confusion.

                                                                                    Nothing exotic: “statistically approximate” means to approximate through the use of statistics.

                                                                                    It doesn’t. That’s just not how people use the terminology. By teaching your audience bad and confusing terms you make it harder for them to communicate with anyone. That’s a meaningless phrase.

                                                                                    It’s obvious that it’s statistics: you can only use it if you have tons of data.

                                                                                    This is wrong on many levels. Statistics has 0 to do with large amounts of data. Some graphical models have a lot of free parameters and some require 0 training.

                                                                                    An ANN is a statistical algorithm

                                                                                    This is a meaningless statement. I can use methods from a field to analyze an algorithm, but then I can use whatever methods I feel like from any field. We can talk about deterministic algorithms or non-deterministic algorithms or randomized algorithms, etc. Each of these have a very technical meaning and none of these terms mean what you refer to with the words “statistical algorithm”.

                                                                                    that I explain how the fancy names we are using are good for literature (and business) but not for science.

                                                                                    The fancy names we use are fine for science. It’s just that you’re misusing them, as I’ve pointed out in numerous places.

                                                                                    ANNs do not learn anything. They approximate

                                                                                    This is what I mean. I haven’t been fooled. You’re coming up with your own nonsense terminology that no one in ML or AI uses because you object to something for some vague philosophical reasons. “Learn” has a technical and mathematical meaning in ML that everyone understands.

                                                                                    I mean that underfit and overfit are two sides of the same coin

                                                                                    This is nonsense. The two happen for totally different reasons in different models and you do different things when they happen. It sounds nice, but it’s useless.

                                                                                    AI is not accountable, so it cannot take decisions over humans

                                                                                    Maybe shouldn’t? But it certainly can.

                                                                                    How? We put it in jail? We turn it off? How? Are you sure you work in the field? Oh… yes I can see… you are! ;-)

                                                                                    You said that AI can’t take decisions over humans and I said that maybe it shouldn’t but it actually is. I don’t see how putting it in jail has anything to do with that.

                                                                                    And now I’m done. If you’re going to insult the people that actually do the things that you purport to “explain” without actually understanding anything I’m out of here.

                                                                                    But I can tell you. You are doing your audience and everyone that happens to listen to you a massive disservice by spreading incorrect terminology, bad ideas, and just all around ignorance.

                                                                                      1. 2

                                                                                        False. Computers are deterministic.

                                                                                        Ugh. No. That’s silly. If you can produce noise indistinguishable from uniform noise then all of this is an irrelevant detail. It makes zero difference if I happen to hook up a better source of entropy to my computer or not.

                                                                                        Dude, until the advent of quantum computing, computers will be deterministic machines.

                                                                                        Their output can always be reproduced from their input.

                                                                                        I’m afraid for your students if you do not understand this.

                                                                                        If you ignore part of the input of your algorithm (e.g. the time at which concurrent events occur, or the noise you use, or a random seed, or anything else that affects the execution) you can fool yourself into thinking that it’s not deterministic.

                                                                                        But the algorithm is deterministic anyway. You just need a crash course in debugging.

                                                                                        If you’re going to insult…

                                                                                        Your first line, in your first response was

                                                                                        A huge number of issues. I work on this and sadly this is spreading a lot of misinformation.

                                                                                        Maybe it was unintended, but it didn’t sound very polite to me… ;-)

                                                                                        I have no intention to insult anybody. And, as I wrote in the slides, I’m not an expert in statistics.

                                                                                        I’m very open to learning from you if you have something to teach.
                                                                                        But you shouldn’t assume you can bullshit me with a lot of vague objections.

                                                                                        For example, you said that my definition of knowledge is naive. Fine!
                                                                                        Please provide alternative definitions! I’m eager to read and study them in the documents you propose. Really!

                                                                                        You say that neural networks are not statistical tools.
                                                                                        I can understand that you want to distinguish your field of competence (and your marketing segment) from that of statisticians, but I argue that ANNs are statistical tools. Just like K-means clustering is. And the rest of ML, for what it’s worth.

                                                                                        Indeed, according to Wikipedia:

                                                                                        Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data.

                                                                                        Guess what? It’s just what you do with ANNs! You analyze and organize data.

                                                                                        If we consider ML and ANNs as simple statistical tools, the whole field will progress faster!

                                                                                        You say that “training” is a good term for science.

                                                                                        I argue that it’s too anthropomorphic: you do not train anyone, you just calibrate an algorithm!
                                                                                        Indeed you establish weights!

                                                                                        You do not create an “artificial intelligence”, you just “simulate an intelligence”. And so on…
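As an illustration of the point (my own sketch, not from the thread): fitting a single weight by gradient descent is exactly the kind of calibration a statistician would call least-squares estimation. The data and numbers below are made up.

```python
# Sketch: "training" a one-weight model is just calibrating that
# weight against data, i.e. classical least-squares estimation.
# Made-up toy data, generated from y = 3x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

w = 0.0    # the single weight to calibrate
lr = 0.01  # step size for gradient descent
for _ in range(2000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # converges to 3.0, the least-squares estimate
```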

                                                                                        To me, all this hype about AI seems like a huge collective hallucination that hurts the research itself, because it fools researchers.

                                                                                        1. 1

                                                                                          until the advent of quantum computing, computers will be deterministic machines

                                                                                          a year working on a project that deploys exactly the same code to 1000 identical hosts will smash this preconception, at least in a practical sense. even code that is intended to be discrete will sometimes pick up stochastic meta-inputs. there is much zen in treating all code as probabilistic. any proof based on a “perfect” computation is theoretical at best.

                                                                                          1. 1

                                                                                            I can feel your pain, really!

                                                                                            Actually I work with a couple of systems deployed to a bit fewer than 2000 heterogeneous machines, which include clumsy stacks (you know… browsers :-D).
                                                                                            And in the past I’ve worked with systems distributed over a couple hundred thousand such machines.

                                                                                            You are just confusing what is expensive with what is concretely possible.

                                                                                            Good architects and developers that keep things simple on a large scale are pretty expensive.

                                                                                            Experienced hackers that can analyze large amounts of logs are even more expensive.

                                                                                            But it’s always a matter of what is at stake.

                                                                                            Believe me: when a well-known bank realizes that one of its customers faced a bug in their system that might cause them to be sued (and lose in court), it does not matter how much it costs: the bug will be reproduced, understood and fixed (and several other unrelated bugs will be identified and fixed in the process! There’s a great irony in this!).

                                                                                            And the reputation of a bank is not worth a human life. Or the discrimination of a minority.

                                                                                            Computer systems can be complex.

                                                                                            Actually the best ones have a low ratio between complexity and value provided. Thus they can evolve in a predictable and smooth way.

                                                                                            But they are always deterministic.

                                                                                            It’s just a matter of cost and competence.

                                                                                      2. 1

                                                                                        There are many NN approaches that are not deterministic. Actually, there are NN approaches that rely on noise.
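A minimal sketch of what “relying on noise” can mean (my example, with a made-up objective, not from the comment): a random-search optimizer that can only move via random perturbations, and that is reproducible only if you seed it.

```python
import random

def noisy_minimize(f, x0, seed, steps=5000, scale=0.1):
    """Random-search minimizer: without the noise it cannot move at all."""
    rng = random.Random(seed)  # seeding makes the run reproducible
    x, best = x0, f(x0)
    for _ in range(steps):
        cand = x + rng.gauss(0.0, scale)  # random perturbation
        fc = f(cand)
        if fc < best:                     # keep only improvements
            x, best = cand, fc
    return x

f = lambda x: (x - 2.0) ** 2  # made-up objective with its minimum at 2
a = noisy_minimize(f, 0.0, seed=1)
b = noisy_minimize(f, 0.0, seed=2)
# Different seeds follow different noise sequences, so the two runs
# return different numbers, yet both land near x = 2. The machine is
# deterministic, but the algorithm is driven by (pseudo)randomness.
```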

                                                                                        False. Computers are deterministic.

                                                                                        So are humans?

                                                                                        1. 1

                                                                                          What do you mean? :-D

                                                                                      3. 1

                                                                                        No equality relation in the Codomain

                                                                                        shrug Who cares?

                                                                                        Re-reading the slide I realized that it might not be clear why this is relevant.

                                                                                        It’s related to the “complex” slide on Function.

                                                                                        Two functions are equal, if and only if

                                                                                        • they have the same domain
                                                                                        • they have the same codomain
                                                                                        • they follow the same rule

                                                                                        The rule part is what makes the equality relation relevant.

                                                                                        If we state the equality as f(x) = g(x), we are assuming an equality relation in the codomain.

                                                                                        Stating instead that f(x) = y <=> g(x) = y does not assume such a relation.

                                                                                        But to avoid the need for equality in the codomain, you need the co-implication.

                                                                                        And the only way to prove the co-implication without an equality relation in the codomain is to prove that each of the rules followed by the two functions can be logically deduced from the other one.

                                                                                        That is: the two rules are the same one expressed in different ways.
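The argument can be written out formally. A sketch in my own notation, reading “f(x) = y” as membership in the function’s graph, so that no equality relation on the codomain is assumed:

```latex
f = g \iff
\begin{cases}
  \operatorname{dom}(f) = \operatorname{dom}(g) = A, \\
  \operatorname{cod}(f) = \operatorname{cod}(g) = B, \\
  \forall x \in A,\ \forall y \in B:\ (x, y) \in \operatorname{graph}(f)
    \Leftrightarrow (x, y) \in \operatorname{graph}(g).
\end{cases}
```

The third clause is the co-implication discussed above: it compares the two rules pointwise without ever asking whether two codomain elements are equal.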

                                                                                      1. 1

                                                                                        This talk makes Haskell look pretty terrible when all it takes on any machine is:

                                                                                        curl -sSL https://get.haskellstack.org/ | sh && stack setup && stack new example

                                                                                        I love Haskell but many people in the community really love to overcomplicate everything and make it look as intimidating as possible. He eventually gets to the different parts of the above but the impression you would be left with is that it’s complicated instead of trivial.

                                                                                        1. 8

                                                                                          I don’t understand the argument. I don’t think Gabriel is over-complicating anything here. Quite the opposite:

                                                                                          He started out by just creating a directory with a Main.hs file and explicitly mentioning that for very simple programs and scripts, you can get away without creating an actual package and just use ghc directly to build it. He then shows how you can turn your directory into a simple package by adding a minimal *.cabal file, and now you can specify dependencies and use cabal build to build your package. Finally, he introduces stack, and explains the conveniences it would give you.
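For reference, the minimal *.cabal file he describes might look roughly like this (package name, version and bounds are hypothetical, not taken from the talk):

```cabal
cabal-version:      2.4
name:               example
version:            0.1.0.0

executable example
  main-is:          Main.hs
  build-depends:    base >=4 && <5
  default-language: Haskell2010
```

With this file next to Main.hs, `cabal build` can resolve the declared dependencies and build the package.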

                                                                                          IMHO giving this step-by-step explanation is more helpful to a beginner than diving right into stack new my_proj. I personally would’ve been more intimidated by the amount of “unnecessary” boilerplate stack new generates than by manually creating 2 files and typing out a few lines myself, when all I want to do (as a starter) is start writing my Hello World.

                                                                                        1. 20

                                                                                          I wish people would lay off some of this nonsense.

                                                                                          The idea that we know something about the computational capacity of the human brain is crazy. Walk up to any neuroscientist and tell them that a brain performs 10^15 computations per second and at best you’ll get a blank stare. Even more than this, the idea that capacity in this sense is meaningful in any way is also nonsense. I can have an efficient algorithm that runs on my phone or an inefficient algorithm that runs on a supercomputer, both of which solve the same task. Who is to say how efficient the algorithms in the brain are? AI has nothing to do with computational power.

                                                                                          AI isn’t a matter of degree. We don’t actually understand the scientific problem of what intelligence is. We know of a few subproblems: object recognition, language, theory of mind, kinematics, etc. But we don’t get the big picture. Who is to say there’s a single key to solving all of them?

                                                                                          The idea that we’re going to copy the brain in any meaningful way is also nonsense. There is no mechanism to image a brain at the level of detail required. And even if the brain is simply a neural network (in the CS sense of neural network; which it is not) recovering the weights even if we have the connections is going to be impossible.

                                                                                          The idea that somehow “evolution” is going to help is also nonsense. All evolution is is an optimization mechanism. Saying “evolution” doesn’t make the problem easier in any way; it’s the same as saying we’re going to try really hard to solve it.

                                                                                          The idea that intelligence is this 1D space and that all animals are ordered on it with us at the top is also totally unfounded. We don’t know how to measure the intelligence of an animal, nor do we have a clue about their relative intelligence or about ours relative to them. Maybe the difference between a human and a great ape is trivial or perhaps it’s huge. Who knows?

                                                                                          I could keep going as everything in this article is totally unfounded in reality.

                                                                                          1. 4

                                                                                            I could keep going as everything in this article is totally unfounded in reality.

                                                                                            Indeed explicitly so - I chuckled at presenting Back to the Future as some sort of scientific evidence.

                                                                                            1. 2

                                                                                              Heh. I spoke to an astrophysicist not long ago who was deeply upset at computer scientists repeating the notion of 2^80 as “the number of atoms in the universe”.

                                                                                              Not all particles are part of atoms, and in the case of dark matter, we can’t reasonably estimate the number of particles, since we don’t know the mass of each. Also, the only number we can even begin to consider is for the observable universe, which is substantially smaller than the entire universe. And no, the interesting stuff is not solely in the observable part. We do have a good number for the mass of the observable universe, but no way right now to guess what portion of it is in supermassive black holes, where it is not useful for computation.

                                                                                              I volunteered that this specific value probably got repeated so much because it’s the accepted number for cryptographic infinity. “Which is thought of as the amount of computation you could do if you converted the entire universe into a computer and ran it until the end of time, because that’s apparently a thing cryptographers fantasize about.” Executive summary: If that’s really what it’s an estimate of, it’s quite low.
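A quick back-of-envelope check (my arithmetic, assuming the figure being repeated is 2^80): the observable universe is conventionally put at roughly 10^80 atoms, which is around 2^266, so 2^80 falls short by nearly two hundred binary orders of magnitude.

```python
import math

atoms = 10 ** 80   # rough conventional estimate, observable universe
crypto = 2 ** 80   # the figure often repeated in crypto folklore

print(math.log2(atoms))  # ~265.75: 10^80 is about 2^266, not 2^80
print(crypto)            # 1208925819614629174706176, about 1.2 * 10^24
```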

                                                                                              Anyway, yeah, this “AI singularity” stuff is coming from the same people who put “first multicellular life” and “transistor” on the same chart. Just for your amusement value.

                                                                                              1. 2

                                                                                                The idea that we’re going to copy the brain in any meaningful way is also nonsense. There is no mechanism to image a brain at the level of detail required. And even if the brain is simply a neural network (in the CS sense of neural network; which it is not) recovering the weights even if we have the connections is going to be impossible.

                                                                                                What is your opinion of Blue Brain Project? They use biological neurons and their strategy to model neurons and recover weights seems eminently sound.

                                                                                                1. 2

                                                                                                  I’ll let hundreds of neuroscientists speak for themselves in the open letter to the European Commission (click “Read the full letter” at the bottom, you can see the list of the 800 or so researchers that have signed so far). There was also some public press. I have yet to meet anyone serious in academic circles that didn’t agree with this position.

                                                                                                2. 1

                                                                                                  I wish people would lay off some of this nonsense.

                                                                                                  I feel you. The problem is that no one can demonstrate, with exactness, which part is (or is not) nonsense and should be laid off. There are impressive tools, an obviously impressive goal, progress in some areas, but no overall perspective on what’s happening. Sure, most sober minds will say whole-brain simulation is a crock (that’s what I believe), but there are a lot of ideas out there and the problem of which approach to “prospect” is very hard. Full brain simulation in overall ignorance of the brain’s function may be a fool’s errand, but trying to simulate the brain may give insights that wind up usable elsewhere. The idea of a single dimension to intelligence also seems to me fatally flawed; it seems like the basic mistake that AGI boosters (or worriers) like Nick Bostrom, Eliezer Yudkowsky and the author of this article make. But it’s hard for me to say that with certainty unless I can demonstrate what intelligence actually is.

                                                                                                  As you say, we don’t know how the brain works and we don’t know what full intelligence is. That might mean the problem is unsolvable and the AGI folks are wasting their time. That might mean the problem is easier than we imagine and a few breakthroughs can prove the naysayers wrong. Extreme ignorance creates these paradoxes. If the problem were colonizing a continent of known size or visiting a planet of known distance, the difficulties would have quantities attached. Without that, we have “unknown unknowns”, with the proviso that since human intelligence has a material structure and software breakthroughs have happened, the prospects aren’t purely speculative. The situation might be comparable to the earlier European conquest of the Americas: the explorers were ignorant of the land and uncertain what the payoff would be, yet all this ignorance didn’t mean they found nothing or that the quest wasn’t worth it.

                                                                                                1. 12

                                                                                                  Adding a full garbage collector to scheme2c and writing a very long blog post on how to reason about coinductive structures in Haskell

                                                                                                  1. 6

                                                                                                    Even if only for ideas, you might want to take a look at the Memory Pool System. Definitely my favorite GC implementation to work with, great people and great docs.

                                                                                                    1. 2

                                                                                                      Looks cool! Thanks!

                                                                                                    2. 3

                                                                                                      Would you consider changing the name of your project?

                                                                                                      Scheme->C (also styled scheme2c; https://github.com/barak/scheme2c) is a pretty well-known Scheme implementation (at least in circles that care about these things) by Joel Bartlett with a 20-year history. I fear there’s going to be a lot of confusion out there. It has a few users that use it for research (most of the code I wrote for my PhD was in Scheme->C, and it’s pretty capable: I ran it on robots, to run fMRI experiments, on a supercomputer, etc.; it still works quite well and has gotten quite a few upgrades to make it feel more modern). There’s some hope that one day it might get more mindshare again, as it’s quite fast (we ran live computer vision demos with it) and capable compared to most Schemes out there.

                                                                                                      1. 2

                                                                                                        Oh I had no idea, I can do that just as soon as I have a computer to work on :)

                                                                                                        Thanks for the heads up!

                                                                                                        Update: Renamed it to c_of_scheme after the OCaml to JS compiler. No results on Google pop up for this :)