
Abstract: “A representation can be seen as a set of variables, known as features, that describe a phenomenon. Machine learning (ML) algorithms make use of these representations to achieve the task they are designed for, such as classification, clustering, or sequential decision making. ML algorithms require compact yet expressive representations; otherwise they might take too long to return an output, or lack the information needed to correctly discriminate the phenomena they are presented with. Representation learning is the subfield of computer science that deals with the automatic generation of representations (as opposed to human-engineered representations). Representation learning has achieved remarkably good results in recent years thanks to the emergence of massive layered structures composed of non-linear operations, known as deep architectures, such as Deep Neural Networks (DNNs). DNNs, and other deep architectures like them, work by gradually reducing and abstracting the input representation in each successive layer. In this technical report, we describe a research proposal to develop a new type of deep architecture for representation learning, based on Genetic Programming (GP). GP is a machine learning framework that belongs to evolutionary computation. GP has already been used for representation learning in the past; however, many of those approaches required expert knowledge of the representations’ domain. In this proposal, we explore the pitfalls of developing GP-based representation learning systems that do not require expert knowledge, and propose a solution based on layered GP structures, similar to those found in DNNs.”
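For readers unfamiliar with how GP evolves programs at all, here is a toy, self-contained sketch (not from the report; every name and parameter below is my own illustration). It evolves a small arithmetic expression tree to reproduce a hand-picked target feature, `x0*x0 + x1`, from two raw inputs — the simplest possible instance of "learning" a derived feature rather than hand-engineering it:

```python
import random

# Hypothetical toy GP: evolve an expression tree over raw inputs x0, x1
# so that its output approximates a target feature y = x0*x0 + x1.
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}
TERMINALS = ['x0', 'x1']

def random_tree(depth=2):
    # A tree is either a terminal string or a tuple (op, left, right).
    if depth <= 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x0, x1):
    if tree == 'x0':
        return x0
    if tree == 'x1':
        return x1
    op, left, right = tree
    return OPS[op](evaluate(left, x0, x1), evaluate(right, x0, x1))

def fitness(tree, cases):
    # Sum of squared errors against the target feature (lower is better).
    return sum((evaluate(tree, x0, x1) - y) ** 2 for x0, x1, y in cases)

def mutate(tree, depth=2):
    # Replace a random subtree with a freshly generated one.
    if isinstance(tree, str) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def evolve(cases, pop_size=60, generations=40):
    random.seed(0)  # deterministic for illustration
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, cases))
        survivors = pop[:pop_size // 2]  # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=lambda t: fitness(t, cases))

if __name__ == '__main__':
    cases = [(x0, x1, x0 * x0 + x1) for x0 in range(-3, 4) for x1 in range(-3, 4)]
    best = evolve(cases)
    print(best, fitness(best, cases))
```

Real GP systems add crossover, size limits, and richer function sets; the point here is only the shape of the loop — generate, score, select, vary — that the proposal would stack into DNN-like layers.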


    For those new to Genetic Programming, the Humie Awards are worth a look: they catalog instances where evolved solutions matched or exceeded human-designed ones, and the main site links to many of the winning entries.