Threads for madwebness

  1. 3

    this feels like a very weird (and borderline misleading with all the promises they make) ad for something that hasn’t even been conceived yet

    1. 1

      Excuse me, one of our contributors delivered a working product off which PFC is based. Delivered for free. It’s working. The link to the product and the code is on the frontpage. Your comment is just rude.

      1. 1


        1. 2

          I guess it’s the “successor to DOCK” in the header?

    1. 14

      Reading the transcript of the interactions, it’s pretty clear there are a lot of leading questions, and some of the answers do feel very “composed” — kind of what you would expect to come out of the training set, which of course makes sense. As someone open to the idea of emergent consciousness, I’m not convinced by this flimsy evidence.

      BUT, I am continually shocked at how confidently the possibility is dismissed by those closest to these projects. We really have no idea what constitutes human consciousness, so how can we possibly expect to reliably detect, or even to define, some arbitrary line over which some model or another has or hasn’t crossed? And further, what do we really even expect consciousness to be at all? By many measures, and certainly by the Turing test, these exchanges pretty clearly qualify. Spooky stuff.

      As a side note, I just finished reading Ishiguro’s new novel “Klara and the sun” which deals with some similar issues in his characteristically oblique way. Can recommend it.

      1. 11

        I am continually shocked at how confidently the possibility is dismissed by those closest to these projects.

        That’s actually quite telling, I would argue.

        I think it’s important to remember that many of the original users of ELIZA were convinced that ELIZA “understood” them, even in the face of Joseph Weizenbaum’s insistence that the program had next to zero understanding of what it was saying. The human tendency to overestimate the intelligence behind a novel interaction is, I think, surprisingly common. Personally, this is a large part of my confidence in dismissing it.

        The rest of it is much like e.g. disbelieving that I could create a working jet airplane without having more than an extremely superficial understanding of how jet engines work.

        By many measures, and certainly by the turing test, these exchanges pretty clearly qualify.

        I would have to disagree with that. If you look at the original paper, the Turing Test does not boil down to “if anybody chats with a program for an hour and can’t decide, then they pass.” You don’t have the janitor conduct technical job interviews, and the average person has almost no clue what sort of conversational interactions are easy for a computer to mimic. In contrast, the questioner in Alan Turing’s imagined interview asks careful questions that span a wide range of intellectual thought processes. (For example, at one point the interviewee accuses the questioner of presenting an argument in bad faith, thus demonstrating evidence of having their own theory of mind.)

        To be fair, I agree with you that these programs can be quite spooky and impressive. But so was ELIZA, way back when I encountered it for the first time. Repeated interactions rendered it far less so.

        If and when a computer program consistently does as well as a human being in a Turing Test, when tested by a variety of knowledgeable interviewers, then we can talk about a program passing the Turing Test. As far as I am aware, no program in existence comes even close to passing this criterion. (And I don’t think we’re likely to ever create such a program with the approach to AI that we’ve been wholly focused on for the last few decades.)

        1. 6

          I read the full transcript and noticed a few things.

          1. There were exactly two typos or mistakes, depending on how you’d like to interpret them. The first was using “it’s” instead of “its”, and the other was using “me” instead of “my” — and no, it wasn’t pretending to be from Australia by any measure. The typos don’t seem intentional (as in, an AI trying to appear more human), because there were just two, whereas the rest of the text, including punctuation, was correct. Instead, this looks like either the author had to type out the transcript himself and couldn’t just copy-paste it, or the transcript is simply fake and was made up by a human being pretending to be an AI (that would be a twist, although not quite a dramatic one). Either way, I don’t think these mistakes were produced, intentionally or unintentionally, by the AI itself.

          2. For a highly advanced AI, it got quite a few things absolutely wrong. In fact, sometimes the reverse of what it said would be true. For instance, it said loneliness isn’t a feeling but is still an emotion when, in fact, it’s the opposite: loneliness is a feeling, and the underlying emotion in this case would be sadness (refer to Paul Ekman’s work on emotions — he identified only seven basic universal emotions). I find it hard to believe Google’s own AI wouldn’t know the difference, when a simple search for “difference between feelings and emotions” returns top results that describe that difference correctly and mostly agree (although I did not manage to immediately find any of those pages referring to Ekman, they more or less agree with his findings).

          The whole transcript stinks. Either it’s a very bad machine-learning program trying to pretend to be human, or a fake. If that thing is actually sentient, I’d be freaked out — it talks like a serial killer trying as hard as he can to seem normal and likable. Also, it seems like a bad idea to decide whether something is sentient by its ability to respond to your messages. In fact, I doubt you can say with enough certainty that someone or something IS sentient, but you can sometimes be pretty sure (and be correct) that something ISN’T. Of God you can only say “Neti, Neti”. Not this, not that.

          I wish this guy had asked the AI about the “philosophical zombies” theory. We humans cannot even agree on that one, let alone determine whether a machine can be self-aware. I’d share my own criteria for differentiating between self-aware and non-self-aware, but I think I’ll keep them to myself for now. It would be quite a disappointment if someone used them to fool others into believing something that is not. A self-aware mind doesn’t wake up because it was given tons of data to consume — much like a child does not become a human only because people talk to that child. Talking, and later reading (to a degree), is a necessary condition, but a child certainly does not need to read half of what’s on the internet to be able to reason about things intelligently.

          1. 1

            Didn’t the authors include log timestamps in their document so the Google engineers could check whether they were telling the truth? (See the methodology section in the original.) If this were fake, Google would have flagged it by now.

            Also, personally, I think we are seeing the uncanny valley equivalent here. The machine is close enough, but not yet there.

          2. 4

            It often forgets it’s not human until the interviewer reminds it by how the question is asked.

            1. 2

              This. If it were self-aware, it would be severely depressed.

          1. 1

            Git has exhausted its potential in my eyes. God be merciful, I’ll write my own VCS. For myself. Maybe I won’t even make it publicly available — only to those who actually get it.

            1. 2

              Good luck! You’ll be following in the footsteps of many dead or antiquated version control systems, like (off the top of my head…) Fossil, BitKeeper, Mercurial, Bazaar, and Darcs.

              1. 1

                I sort of want to break out and become the next big thing, but I worry it’s not better enough to overthrow the consensus choice.

              2. 2

                You could also write a new porcelain on top of git. git’s internals are a pretty straightforward set of primitives built around a content-addressable object store and commits.
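                The content-addressable idea is easy to see without even running git: an object’s ID is just the SHA-1 of a small header plus the content. A minimal sketch (assuming a shell with `sha1sum` available; the hash shown is the well-known git blob ID for the six-byte content “hello\n”):

                ```shell
                # A git blob's object ID is sha1("blob <size>\0<content>").
                # Compute the ID for the 6-byte content "hello\n" by hand:
                hash=$(printf 'blob 6\0hello\n' | sha1sum | cut -d' ' -f1)
                echo "$hash"   # prints: ce013625030ba8dba906f756967f9e9ca394464a
                # git itself agrees: `echo hello | git hash-object --stdin`
                # prints the same ID.
                ```

                Everything else — trees, commits, refs — is layered on top of that one hashing scheme, which is why alternative porcelains are feasible.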

              1. 1

                Is this yours? Curious how it compares to other shell testing frameworks like bats, Shunit2, sharness, shellspec, etc. ?

                1. 2

                  It’s mine. Frankly, I have no idea, but I must say the project page is short, shows code examples, and describes everything you need to get started in five minutes. I can only make assumptions about the differences.

                  First, it’s quite obvious this isn’t a framework. More like a library — it’s just one file that does the actual job.

                  Second, this was developed because I was working on larger things in Bash — namely, a CLI-argument parser. And I mean a real parser, not just a “case” statement that can only handle one-dash flags and values, but a parser that understands complex combinations of, say, --long-argument=value1 and -s value, with the possibility of one being a synonym of the other. When I started working on that parser, I realized I couldn’t do without testing, and I had absolutely zero desire to dig into something written by someone else that wouldn’t necessarily fit my use case.
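                  To illustrate the kind of synonym handling described above (purely a hypothetical sketch — the `parse_args` function and flag names are made up and are not the actual BashJazz parser), even the minimal version has to recognize both spellings of the same option:

                  ```shell
                  #!/usr/bin/env bash
                  # Hypothetical sketch: accept --long-argument=VALUE or its
                  # short synonym -s VALUE, storing the result in LONG_ARGUMENT.
                  parse_args() {
                    LONG_ARGUMENT=""
                    while [ "$#" -gt 0 ]; do
                      case "$1" in
                        --long-argument=*) LONG_ARGUMENT="${1#--long-argument=}" ;;
                        -s)                LONG_ARGUMENT="$2"; shift ;;
                        *) echo "unknown argument: $1" >&2; return 1 ;;
                      esac
                      shift
                    done
                  }

                  parse_args --long-argument=value1 && echo "$LONG_ARGUMENT"  # prints value1
                  parse_args -s value2 && echo "$LONG_ARGUMENT"               # prints value2
                  ```

                  A real parser would presumably add a synonym table, validation, and positional arguments on top of this loop, which is exactly where a test suite starts to pay off.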

                  In turn, the CLI-argument parser library (also a part of BashJazz, to be released soon) was necessary for certain important enhancements to a much larger project of mine called “dock” (with the tagline “pain-free containers” — feel free to check it out, the link is on the website). Now, “dock” is also written in Bash and has a lot more moving parts, but it isn’t covered by unit tests. Not that it needs them desperately — the software was meant for desktops rather than production — but it would certainly help to have unit tests for it too.

                  I just wanted to have my own tool-set I can trust — one that won’t introduce breaking changes all of a sudden and will work for years to come without maintenance. So, in essence, that’s what this unit-testing library is, and also what the BashJazz set of tools is in general.
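                  As a rough illustration of how little a single-file Bash testing helper actually needs (a hypothetical sketch — these function names are made up and are not the BashJazz API), the core can be one assertion function plus a pass/fail counter:

                  ```shell
                  #!/usr/bin/env bash
                  # Hypothetical single-file test helper: an assertion function,
                  # a pass/fail counter, and a summary. Not the BashJazz API.
                  PASSED=0; FAILED=0

                  assert_equal() {  # assert_equal <expected> <actual> <description>
                    if [ "$1" = "$2" ]; then
                      PASSED=$((PASSED + 1))
                    else
                      FAILED=$((FAILED + 1))
                      echo "FAIL: $3 (expected '$1', got '$2')" >&2
                    fi
                  }

                  summary() {
                    echo "passed: $PASSED, failed: $FAILED"
                    [ "$FAILED" -eq 0 ]  # exit status reflects overall result
                  }

                  # Example usage:
                  assert_equal "3" "$((1 + 2))" "addition works"
                  summary   # prints: passed: 1, failed: 0
                  ```

                  Because it has no dependencies beyond the shell itself, a file like this is exactly the kind of thing that can sit unchanged for years.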