1. 33
  1.  

  2. 5

    Paper: https://www.osapublishing.org/prj/abstract.cfm?uri=prj-7-8-823 (free access if you solve their captcha)

    My summary after a quick read: they’re using (intentionally made?) imperfections in glass to implement analog logic. Signals are light in and light out.

    This would be useful for some kinds of computation, but notably you still need amplifiers for things like feedback or fanout. I don’t think they can invert signals with this, so it wouldn’t be able to implement all logic or analog operations (e.g. subtraction) either, unless they can do something clever with phase cancellation between lasers.

    It looks like the “AI” has to do with how they designed the glass on their computer, not how the glass operates IRL.

    I may have missed things; please advise if you read differently. A lot of the language is too thick for me.

    1. 10

      Didn’t they effectively implement a trained model in glass? I think that’s what is meant by AI here.

      1. 4

        Yeah, you need to amplify the signal afterwards since the nonlinearities absorb some of the light by design.

        As can be read in the introduction, the main contribution here is the tiny size of their system:

        “The size of the NNM is 80λ by 20λ, where λ is the wavelength of light used to carry and process the information.”

        I don’t know anything about lasers, but given typical optical wavelengths of around a micrometer, the whole device would be on the order of tens of micrometers across.

        The toy problem they are solving is the classic MNIST dataset with a 5000/1000 training/test split, trained with SGD (minibatch size of 100) to minimize the standard cross-entropy loss. The final accuracies are 79% and 84% for the 2D and 3D models, respectively.
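
        For reference, the conventional-software equivalent of that training setup would look roughly like the sketch below. This is plain numpy with a simple linear softmax classifier and random placeholder data standing in for the glass model and the MNIST subset; only the 5000/1000 split, the minibatch size of 100, and the cross-entropy loss are taken from the paper.

            import numpy as np

            # Placeholder data in the shapes of the paper's MNIST subset:
            # 5000 training and 1000 test images (28x28 -> 784), 10 digit classes.
            rng = np.random.default_rng(0)
            X_train = rng.random((5000, 784))
            y_train = rng.integers(0, 10, 5000)
            X_test = rng.random((1000, 784))
            y_test = rng.integers(0, 10, 1000)

            W = np.zeros((784, 10))  # a plain linear classifier as a stand-in model
            b = np.zeros(10)

            def softmax(z):
                z = z - z.max(axis=1, keepdims=True)
                e = np.exp(z)
                return e / e.sum(axis=1, keepdims=True)

            lr, batch = 0.1, 100  # minibatch size of 100, as in the paper
            for epoch in range(10):
                order = rng.permutation(len(X_train))
                for i in range(0, len(X_train), batch):
                    idx = order[i:i + batch]
                    xb, yb = X_train[idx], y_train[idx]
                    p = softmax(xb @ W + b)            # forward pass
                    p[np.arange(len(yb)), yb] -= 1     # gradient of cross-entropy wrt logits
                    W -= lr * xb.T @ p / len(yb)       # SGD update
                    b -= lr * p.mean(axis=0)

            accuracy = (softmax(X_test @ W + b).argmax(axis=1) == y_test).mean()
            print(f"test accuracy: {accuracy:.2%}")  # ~10% on random data; they report ~80% on MNIST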

        The model itself is pretty insane though: a piece of silicon dioxide with some blobs of scatterers and absorbers dropped in. If I understood correctly, the scattering corresponds to the linear matrix products in a regular layered ANN, since it fires the incoming light in multiple directions depending on the shape of the blob.

        The nonlinearities are modeled after the ReLU activation function (which is just max(0, x)); they are randomly placed and kept constant during training. They absorb all incoming light up to a certain limit, beyond which they slowly start emitting again.
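
        If that reading is right, the forward pass is loosely analogous to the sketch below. This is only a numerical analogy, not the paper’s optical model: fixed random “scattering” matrices play the role of the linear layers, and a shifted max(0, x) clamp stands in for the absorbing blobs.

            import numpy as np

            rng = np.random.default_rng(0)

            # Fixed "scattering" stages: the shapes of the blobs determine how the
            # incoming light is redistributed, which is what gets compared to W @ x.
            W1 = rng.normal(size=(64, 784))
            W2 = rng.normal(size=(10, 64))

            def absorber(x, threshold=1.0):
                # ReLU-like behaviour: everything below the threshold is absorbed,
                # anything above it gets through (cf. max(0, x) shifted by the threshold).
                return np.maximum(0.0, x - threshold)

            def forward(image):
                h = absorber(W1 @ image)   # scatter, then pass through the absorbing blobs
                return W2 @ h              # final scattering stage toward the detector regions

            digit = rng.random(784)        # placeholder for a flattened 28x28 input image
            print(forward(digit).argmax()) # brightest detector region = predicted digit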

        Even though “just using gradient descent” sounds simple, it seems like the actual gradient calculation is very involved. This is understandable since they have to simulate sub-wavelength diffraction in a physical system :) I don’t claim to have understood very much of this part, but it appears they incrementally grow the “linear” blobs during the optimization, so it may be enough to evaluate the gradients only at the boundaries of the diffraction blobs.
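
        As a toy illustration of why that is hard (this is not their actual method): once the forward pass is a wave simulation, every gradient with respect to a material parameter has to be pulled back through that simulation, in the crudest case by re-running it with perturbed parameters.

            import numpy as np

            def simulate(blob_size):
                # Stand-in for a full electromagnetic simulation: maps a single material
                # parameter (a hypothetical "blob size") to the intensity at one detector.
                return np.sin(3.0 * blob_size) * np.exp(-blob_size)

            def loss(blob_size, target=0.2):
                return (simulate(blob_size) - target) ** 2

            def grad(blob_size, eps=1e-6):
                # Crudest possible gradient: central finite differences, i.e. two extra
                # simulation runs per parameter. This is why restricting the updates to
                # the blob boundaries (fewer effective parameters) would be a big win.
                return (loss(blob_size + eps) - loss(blob_size - eps)) / (2 * eps)

            blob = 0.5
            for _ in range(50):
                blob -= 0.1 * grad(blob)   # gradient descent on the material parameter
            print(blob, loss(blob))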

        I think this is a pretty clever way to approximate functions and I look forward to CNN architectures using the same ideas. I just wish they would’ve called the method “Inference by Interference” :)

        1. 1

          It’s clearly intentionally made; it’s not like they’re going to pick up a random piece of glass and find that it already “recognizes” numbers.

        2. 2

          I believe this has implications for how we might build light-based CPUs in the future.

          Also, imagine instantly reading off the SHA256 hash of some given data using this technique.

          1. 7

            I think that would need feedback loops, whereas this works only for feed-forward networks.

            1. 1

              Probably, yes

          2. 1

            Is AI used to reduce the number of iterations required to reach a scatter of imperfections where, statistically, a ‘2’ drawn in various ways is likely enough to send more photons to a particular spot?

            Or is the glass itself effectively a machine learning engine, with the training done on a computer first and the produced glass being the trained model?

            I’m thinking both (they’re the same thing?), but someone who understands this better may be able to set me straight!

            1. 2

              It sounds to me like the glass is the model, and the training process involves manufacturing a new piece of glass repeatedly. I don’t think it means much to say that AI is used to reduce the number of iterations; modern machine learning techniques (any technique that has “differentiable” in its name) will learn as much as there is to be learned from any new information that’s provided, so any modern ML technique is using as few iterations as it knows how to do. (Edit: added a word for clarity)

              1. 1

                They probably trained using normal tools and have software to convert the final trained ANN to glass.

                1. 1

                  I have very vague memories from a college optics course of techniques for altering the refractive index (RI) of glass (or other materials) using lasers and the like, and I remember that different wavelengths can see different RIs (that’s how prisms work), so I suspect you’re right: alter the RI to favor certain wavelengths in accordance with the weights of the NN, and voilà, a physical NN.

                  I imagine you could do something similar with valves and pipes, restricting or increasing the flow of water via pressure-sensitive valves or the like. That would be a pretty cool thing to see built.