1. 27
  1.  

  2. 9

    This article’s generalizations about philosophy feel like one more case of the blind men and the elephant. Take the claim that philosophers are typically architecture astronauts. I wouldn’t find that a particularly helpful description of anyone, though I can see how someone might reach for the term when talking about Kant or David Lewis[0]; there are vast numbers of other philosophers it simply wouldn’t apply to.

    (I’m biased, of course: I did four years of undergrad and four years of graduate school in philosophy before eventually entering software.)

    [0] I suspect that’s a rare pairing of names in the literature.

    1. 1

      > I wouldn’t find that a particularly helpful description of anyone, though I can see how someone might reach for the term when talking about Kant or David Lewis[0]; there are vast numbers of other philosophers it simply wouldn’t apply to.

      I probably know way less about this than you, but I do have a thing for Kant. Architecture and law are commonly associated with him, but I don’t think that translates neatly into the narrow reading this post is slightly pushing, given his whole “undermine the whole of Western philosophy, including Newton, Leibniz, Locke, Hume, and Descartes” shtick.

    2. 5

      Oh whoa, this is cool.

      Lately, what with our industry’s ongoing ethical crisis, I have been thinking a lot about the opposite: what programmers can learn from philosophers. It’s neat to think there could be benefits in both directions.

      1. 1

        The problem is that a good part of academia is somehow stuck in ethical discussions about tech that have no connection to the tech that actually exists.

        “If we mind-transfer a human consciousness into a jar of pickled ginger, is that jar human? And is it now Asian, or still its original race? To solve this ethical problem, we must consider the possibility that psychedelics are just a blockchain-based communication medium with aliens from another dimension”

        1. 2

          In my experience, those discussions are not about tech at all. They just use tech as a magic wand to enable abstract consideration of (for example) “What is a person?”

          1. 1

            Yes, but precisely because they are not about tech, they are easier for people with no understanding of current tech, and a lot of academics and armchair philosophers prefer to stay in that comfort zone rather than deal with the complexity of going beyond the simplified, abstract idea of an “AI” and connecting it to the actual software engineering and machine learning that exist behind the veil. Only a few can do that, and a good part of the discourse never gets into these cross-field topics.

            1. 1

              It is usually the programmers who overestimate their knowledge of AI and philosophy. For example, thinking that it has something to do with the minutiae of classifier algorithms rather than with agent-based, decision-theoretic analysis of value alignment for supercapable goal-achievers.

              See: intelligence.org

              1. 1

                > It is usually the programmers who overestimate their knowledge of AI and philosophy.

                That will always be the case, lol. But still, my point is more about the parts of philosophy research that reach and influence the public discourse. (By “public” I mean the discourse space just outside academia, i.e. tech people interested in philosophy and ethical issues.)

                > For example, thinking that it has something to do with the minutiae of classifier algorithms rather than with agent-based, decision-theoretic analysis of value alignment for supercapable goal-achievers.

                I’m not saying they should care about the details; that would be silly and pointless. But they should, for example, stop treating algorithms as total black boxes handed down from the gods, the way most fearmongers do so they can denounce modern technology as evil. That usually doesn’t look like good philosophy.

                > See: intelligence.org

                Yeah, I’ve seen them before. I haven’t read anything from them, but I can believe they are not part of what I’m describing. I hope more groups will try to tackle the issue and assert themselves more in the future. Maybe this afternoon I’ll read something they published. Thank you for the reminder.

      2. 4
        1. 0

          +100

        2. 3

          Here’s a link to a small commentary with the original definition of grue. Like most philosophy, you’re probably well served by reading the background on Hume (I think) and the problem of inductive reasoning that he’s writing in response to. I’m not a philosopher, though, so I could be flagrantly wrong.

          Philosophy could be defined as the study of dead humans shitposting at each other over the centuries.

          1. 2

            Practical ontology in the style of Barry Smith has learned from computer science and many other sciences, especially from efforts in medical and biological ontology. These groups work with formal ontologies using RDF and automated reasoning. It’s fascinating to read about.
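
            If anyone is curious what that looks like in practice, here’s a toy sketch (the class names and IRIs are invented, and it only needs the rdflib Python library). Real biomedical work uses OWL and dedicated reasoners, but a SPARQL property path gives a small taste of the “automated reasoning” part:

            ```python
            # Toy sketch of "RDF plus automated reasoning" (names and IRIs invented).
            # Real ontologies use OWL and dedicated reasoners; here a SPARQL 1.1
            # property path stands in for the inference step.
            from rdflib import Graph

            TTL = """
            @prefix ex:   <http://example.org/> .
            @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

            ex:Granuloma rdfs:subClassOf ex:Lesion .
            ex:Lesion    rdfs:subClassOf ex:MaterialEntity .
            ex:biopsy42  a ex:Granuloma .
            """

            g = Graph()
            g.parse(data=TTL, format="turtle")

            # Find every individual that is (transitively) a MaterialEntity.
            results = g.query("""
                PREFIX ex:   <http://example.org/>
                PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
                SELECT ?thing WHERE { ?thing a/rdfs:subClassOf* ex:MaterialEntity . }
            """)
            for row in results:
                print(row.thing)  # -> http://example.org/biopsy42
            ```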

            1. 7

              Thanks for making that connection. I have a PhD in philosophy and now my career is working with the biological and biomedical ontologies you’re talking about: http://obofoundry.org. Happy to answer any questions.

              It has certainly been my experience that philosophy and programming have a lot to teach each other.

              1. 2

                Very cool! I’ve only sporadically read some books and articles about BFO. Before that, my interest in conceptual modelling was stimulated by Christopher Alexander (community-driven pattern languages for architecture) and Eric Evans (domain-driven design for software).

                I’d be super curious to hear just what kind of problem you’re working on at the moment, or some recent success (small or large) in ontological modelling or problem-solving that made you feel excited about being in the field. :)

                1. 3

                  My graduate work was in philosophy of science. Along the way I learned to program, and like the author of the article I still really enjoy the challenge of breaking a problem domain into clean, orthogonal pieces, then writing code to make the solution actually work. Now I’ve found a niche where I can combine technical and conceptual skills to help scientists do better science.

                  I’m deeply involved in the Open Biological and Biomedical Ontologies community, which brings together a large number of open source, scientific ontologies under a set of shared best practices. I contribute to several ontologies, including the Ontology for Biomedical Investigations. My main client is the Immune Epitope Database, where I help with data integration and validation. I’m most excited about building tools to help scientists develop and work with ontologies and linked data. Two examples of that are ROBOT and Knotation (work in progress).

                  It’s not my work, but one of the most impressive examples in this area is the Monarch Initiative. They’ve been able to integrate genotype and phenotype data across multiple model organisms to help understand rare diseases in humans. Much harder than it might sound!
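
                  To make the data-integration point concrete, here’s a deliberately toy example, not anything from IEDB or our real pipelines: the local predicates and record names are invented, and I’m using OBI_0000070 (which should be OBI’s “assay” term, if I remember right) purely as a placeholder. Two sources with completely different local schemas still line up because they share the ontology IRI:

                  ```python
                  # Toy illustration of integration via shared ontology IRIs (rdflib).
                  # Local predicates and record names are invented; treat OBI_0000070
                  # as a placeholder term identifier.
                  from rdflib import Graph

                  SOURCE_A = """
                  @prefix exA: <http://example.org/siteA/> .
                  @prefix obo: <http://purl.obolibrary.org/obo/> .
                  exA:run1 exA:assay_type obo:OBI_0000070 .
                  """

                  SOURCE_B = """
                  @prefix exB: <http://example.org/siteB/> .
                  @prefix obo: <http://purl.obolibrary.org/obo/> .
                  exB:experiment9 exB:method obo:OBI_0000070 .
                  """

                  g = Graph()
                  g.parse(data=SOURCE_A, format="turtle")
                  g.parse(data=SOURCE_B, format="turtle")

                  # One query finds the records from both sources, despite their
                  # different local schemas, because the term IRI is shared.
                  rows = g.query("""
                      PREFIX obo: <http://purl.obolibrary.org/obo/>
                      SELECT ?record WHERE { ?record ?p obo:OBI_0000070 . }
                  """)
                  for row in rows:
                      print(row.record)
                  ```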

                  1. 1

                    Knotation looks really good. It reminds me a bit of Inform 7, which by the way also has a pretty interesting basic ontology.

                    It seems somehow obvious that “IT systems” would benefit from having clear ontological structures that can be expressed in readable ways. For example, I would love to be able to study the ontology of Photoshop as a way to learn how to think about using that program. It would be a bit like looking at the type hierarchy of the program, or its storage schema—but those are usually somehow convoluted compared to how you would really describe the ontology to a user.

                    It’s really fascinating to browse around in the Monarch site! It feels somehow comforting in this purported age of “post-truth” to see these obviously high quality arrangements of scientific knowledge.

                    1. 2

                      Thanks for reminding me about Inform 7 – I’ll take another look at it.

                      I agree that the set of types (and their relations) for a program can sometimes be close to the ontology of its domain. That’s also part of the promise of object-oriented software, and of the Domain-Driven Design you mentioned. In practice, both the types and the objects are often more about the implementation than the domain. Good technical documentation should break a software system down into a set of clear concepts, allowing the user to form an effective mental model. The worry is that any such abstraction will be “leaky”. Having some sort of implementation, in a program or formalism, forces you to confront those leaks and edge cases.

                      The ontologies I work with often support more detail about the domain than anybody actually wants to implement in their database or system. Programmers are likely to think that this makes them over-architected. But that detail lets us do data integration between systems that are talking about the same domain using different abstractions suited to their different use cases.
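
                      A tiny invented sketch of what I mean (all names and IRIs here are made up): the implementation types stay deliberately thin, but each record keeps a pointer back to a term in the richer domain ontology, so exports from differently-shaped systems can be reconciled later.

                      ```python
                      # Invented sketch: thin implementation types that keep a reference
                      # to a richer domain ontology term, so differently-shaped systems
                      # can still be lined up on the shared IRIs later.
                      from dataclasses import dataclass

                      # Hypothetical ontology IRIs; a real ontology carries far more
                      # detail (definitions, axioms, relations) than these classes do.
                      SAMPLE_IRI = "http://example.org/onto/BiologicalSample"
                      ASSAY_IRI = "http://example.org/onto/Assay"

                      @dataclass
                      class Sample:
                          id: str
                          iri: str = SAMPLE_IRI  # link back to the domain ontology

                      @dataclass
                      class Assay:
                          id: str
                          sample_id: str
                          iri: str = ASSAY_IRI

                      # Each system can model assays however suits its use case; as long
                      # as exported records keep the IRIs, integration happens downstream.
                      record = Assay(id="a1", sample_id="s7")
                      print(record.iri)
                      ```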