1. 45
  1. 6

    Very interesting!

    Where does it come from, though? Who created this? How was this AI built? What data was fed into this AI to train it? Is a new image generated on each request? What hardware is this running on?

    1. 5

      A link to the creator’s comment on HN.

    2. 6

      I wonder how similar it is to the most similar-looking face in the dataset. Probably very different, but it seems like something worth checking.
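
      One way to actually run that check, sketched below: embed the generated face and every training face with the same feature extractor, then look for the nearest neighbour by cosine similarity. The `nearest_neighbor` helper and the random 128-dimensional embeddings are hypothetical stand-ins for real face embeddings (e.g. from a FaceNet-style network), not anything the site exposes.

      ```python
      import numpy as np

      def nearest_neighbor(query_vec, dataset_vecs):
          """Return (index, cosine similarity) of the most similar dataset face."""
          q = query_vec / np.linalg.norm(query_vec)
          d = dataset_vecs / np.linalg.norm(dataset_vecs, axis=1, keepdims=True)
          sims = d @ q                              # cosine similarity to every face
          best = int(np.argmax(sims))
          return best, float(sims[best])

      # Toy usage with random embeddings standing in for real ones:
      rng = np.random.default_rng(0)
      generated = rng.normal(size=128)              # embedding of the GAN face
      dataset = rng.normal(size=(10_000, 128))      # embeddings of training faces
      idx, sim = nearest_neighbor(generated, dataset)
      print(idx, sim)
      ```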

      1. 10

        Computer scientists should not be working on this kind of software. This is absolutely mental. Aside from the existential terror of nothing you see outside your immediate surroundings being verifiable as real, the sociopolitical possibilities this opens up for mass-generating fake content are astounding. Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.

        1. 17

          I have three separate things to say in response.

          1. Computer scientists in general don’t have an effective means for collectively deciding what kind of work is unethical, let alone enforcing such decisions. Perhaps your appeal would be better addressed to lawmakers? But if you follow that thread, you’ll immediately find some difficult problems. Regulating the development of new technologies is not so easily done.

          2. You may find this particular instance or application of GAN technology to be creepy, but I imagine that you’d find some other applications harmless or benign, and yet others obviously desirable. In any case, it’s a very general technique and has been a fertile research area over the last few years. It’s not going away, and it’s much too late to stop it. Finding ways to mitigate the potential harms that you foresee would be both more achievable and more effective.

          3. Epistemology was already more difficult than most people would care to acknowledge. In particular, realistic photos have been getting easier and easier to convincingly fake for decades already. So, even if you had the power to ban certain technologies or research areas, where would you draw the line?

          1. 2

            > Computer scientists in general don’t have an effective means for collectively deciding what kind of work is unethical, let alone enforcing such decisions. Perhaps your appeal would be better addressed to lawmakers? But if you follow that thread, you’ll immediately find some difficult problems. Regulating the development of new technologies is not so easily done.

            Each individual person can absolutely decide whether or not to do unethical work, especially computer scientists, who can’t even use the ‘but I have to pay the bills’ excuse given the shortage of qualified people in that field.

            1. 2

              True, but irrelevant.

              1. 1

                > Computer scientists should not be working on this kind of software.

                Your justification:

                > Each individual person can absolutely decide whether or not to do unethical work

                means that we simply have different opinions on this, which comes back to the fact that you cannot enforce a prohibition on working on it.

            2. 24

              > Computer scientists should not be working on this kind of software.

              Malicious actors will work on it regardless. The sooner people become aware of this sort of software, the less damage it can cause.

              1. 1

                I kind of doubt that a trustless society is one where damage is minimised.

              2. 10

                > Aside from the existential terror of nothing you see outside your immediate surroundings being verifiable as real

                That was already true. Plato and Descartes and all that.

                1. 6

                  I have the same visceral reaction as you. However, a prohibition on research is hopelessly unenforceable, especially on a global scale. So our focus should be on how to deal with this technology, not how to impede it.

                  But the more fundamental problem is this: if we prohibit any research direction that leads to immediately objectionable results, we may cut ourselves off from discovering positive results deeper in the same line of inquiry (forensic face reconstruction, etc.).

                  1. 3

                    Why does everyone think I support prohibition or regulation? I didn’t mention either.

                    1. 4

                      What do you support?

                      1. 1

                        Common sense

                      2. 3

                        What do you mean by “should not”, then?

                        1. 2

                          Rather don’t than do

                    2. 3

                      In defense of pure science:

                      It’s a function that synthesizes data, and is trained to produce data that we recognize as looking like human faces. Quite apart from interesting questions like “can we take this apart and learn more about how humans recognize faces?”, you can use the same techniques to generate all sorts of things. Simulations of machine sensor readouts for reliability testing and monitoring? Sure, you can make lots of realistic, varied, simulated jet-engine failures without having to blow up as many jet engines. Chemical reactions? Totally: it can grovel through databases of drug molecules and try to produce things a chemist would look at and say “hm, that’s worth checking out”. And it can even be pitted against image-recognition algorithms to generate camouflage: patterns or images that to us look like, say, a turtle, but which fool the other AI into classifying them as a “Land Rover”.
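
                      To make that generator-versus-discriminator setup concrete, here is a minimal toy sketch, assuming PyTorch. The generator learns to mimic samples from a 1-D Gaussian rather than faces; real face GANs are trained at vastly larger scale, so treat this purely as an illustration of the adversarial loop.

                      ```python
                      import torch
                      import torch.nn as nn

                      # Toy GAN: G maps 8-d noise to a fake 1-D "sample",
                      # D scores a sample as real (1) or fake (0).
                      G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
                      D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
                      opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
                      opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
                      bce = nn.BCEWithLogitsLoss()

                      for step in range(2000):
                          real = torch.randn(64, 1) + 3.0   # "dataset": samples from N(3, 1)
                          fake = G(torch.randn(64, 8))      # generator's current attempt

                          # Discriminator step: push real toward 1, fakes toward 0.
                          d_loss = (bce(D(real), torch.ones(64, 1))
                                    + bce(D(fake.detach()), torch.zeros(64, 1)))
                          opt_d.zero_grad()
                          d_loss.backward()
                          opt_d.step()

                          # Generator step: fool D into calling fakes real.
                          g_loss = bce(D(fake), torch.ones(64, 1))
                          opt_g.zero_grad()
                          g_loss.backward()
                          opt_g.step()

                      # The mean of generated samples should drift toward 3.0.
                      print(G(torch.randn(1000, 8)).mean().item())
                      ```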

                      In conclusion, this image.

                      1. 2

                        You don’t need neural networks to manipulate photographs for nefarious purposes. Photoshop has been around for decades now anyway.

                        While I’m the first one to admit that the current state of Deep Learning research sometimes seems like a cruel joke, this particular technology (Generative Adversarial Nets) is actually useful and has been applied to single-image super-resolution and color-constancy tasks, among many others.

                        It’s not about fake celebrity faces.

                        1. 1

                          If you consider this unethical, what do you think about this video?

                          1. 1

                            I’m less bothered because it’s not a completely automated process; at least with Funny Obama, the fake content requires editing and rigging and an actor to say his lines, etc.

                            1. 1

                              No. It is easy. There is (was?) an app for that. Surprise, it mostly got used for porn.

                              1. 1

                                It’s easy, but the process of having an impressionist speak on behalf of a render is less so, especially if it has to be automated.

                        2. 4

                          Teeth have improved since the last such thing I saw. Still has trouble with hair, clothes, and sometimes ears and wrinkles.

                          I’d personally find it handy to have a way to link to individual photos, to show features and flaws. Still, nice work.

                          1. 4

                            When it messes up, the results are nothing short of pure nightmare fuel: https://i.imgur.com/6UcKzbS.jpg

                            1. 3

                              I’d really like to see these randomly mixed with pictures of real people, because they don’t look real to me. I get an ‘uncanny valley’ feeling when I look at them: a mismatch between different facial features (eyes, skin, and face shape). Maybe that’s just confirmation bias because I was told they are fake.

                              Does anyone else have this?

                              1. 3

                                Interesting that it seems to have a really hard time mixing children’s features and adult features in a believable way. Also, children should probably not be in this dataset, although I’m not sure I can solidly justify that assertion.

                                1. 2

                                  Oh god, by far the creepiest is when it gives a child a facial expression that could only occur with the fully developed facial muscles of an adult. Yikes, that will haunt my dreams.

                                2. 1

                                  I wonder how hard it would be to train an AI that can tell generated faces from real ones. I’ve heard concerns that e.g. Twitter bots could use these generated faces as profile pics to be harder to detect (they often reuse a number of stolen pictures right now, which makes them somewhat easier to detect). But I feel it’s probably feasible to train an AI to detect these generated faces with high enough confidence that you’d get a better fingerprint for bots than with stolen pictures.
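
                                  A rough sketch of what such a detector could look like, assuming PyTorch and a labeled set of 64x64 face crops; the random tensors below are placeholders for real batches of genuine and GAN-generated faces, and a production detector would need far more care than this.

                                  ```python
                                  import torch
                                  import torch.nn as nn

                                  # Tiny CNN emitting one logit per image: > 0 means "GAN-generated".
                                  model = nn.Sequential(
                                      nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, 1),
                                  )
                                  opt = torch.optim.Adam(model.parameters(), lr=1e-3)
                                  bce = nn.BCEWithLogitsLoss()

                                  for step in range(100):
                                      # Placeholders: swap in real batches of labeled face crops.
                                      images = torch.randn(32, 3, 64, 64)
                                      labels = torch.randint(0, 2, (32, 1)).float()  # 1 = generated, 0 = real
                                      loss = bce(model(images), labels)
                                      opt.zero_grad()
                                      loss.backward()
                                      opt.step()
                                  ```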