1. 4

    Chollet’s arguments read like pop sci handwaving while Yudkowsky’s rebuttal is pleasantly rigorous.

    On a related note, I often observe two huge cognitive failures around big issues like climate change or AI.

    One is in assessing possibilities and risks. An intelligence explosion may not be probable, but it is possible, and the potential negative consequences are huge, so caution is definitely warranted. Yet many people hasten to deny the possibility altogether, using weak or totally irrational justifications.

    The other is a failure to grasp non-linear effects, or an insistence on linear behaviour contrary to evidence (e.g. Chollet seems to assert that progress can only ever be linear, without really substantiating the claim).

    1. 6

      climate change is a much bigger threat, because it’s super risky and super likely (certain, even).

      Apart from that, I’m not very convinced by the rebuttal. I still don’t believe in exponential curves in nature (the GDP? recent notion, moving target, not even clear what it measures). Progress in science becomes slower whenever a field matures; good luck making breakthroughs in areas of maths that are 2 centuries old. It should be the same for a hypothetical self-improving AI: it’s smarter, but making something even smarter is also much more difficult, so in the end it’s all about diminishing returns.
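      That intuition is easy to put in toy numbers (a made-up model, not a claim about real AI): if each round of self-improvement yields some fraction r of the previous round’s gain, total capability converges for r < 1 and explodes for r > 1.

```python
# Toy model of recursive self-improvement (all numbers are made up).
# Each generation adds a gain that is r times the previous generation's gain.

def capability_after(generations, first_gain=1.0, r=0.5):
    capability, gain = 1.0, first_gain
    for _ in range(generations):
        capability += gain
        gain *= r
    return capability

# r < 1: each improvement is harder than the last; capability converges
# toward the limit 1 + first_gain / (1 - r), which is 3.0 here.
print(capability_after(100))
# r > 1: each improvement is easier than the last; capability explodes.
print(capability_after(20, r=1.5))
```

      Which regime real self-improvement would fall into is, of course, exactly the point in dispute.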

      1. 2

        Totally agree about climate change. What I was trying to say is: even if one takes the position that the worst effects of climate change have very low probability (contrary to established science), the consequences are so grave that action has to be taken regardless. But this obvious conclusion is lost on many people for some bizarre reason.

        It’s a similar story with AI. As soon as we establish that there is a possibility of superintelligent self-improving AI, we have to understand that there are huge risks associated with that, and have to proceed with caution rather than burying heads in sand.

        To your points:

        • I think the important thing is not to be convinced by the proponents of intelligence explosion, but rather to recognise that nobody has proof that it’s impossible.
        • We don’t need to find exponential processes in nature, because we’re not talking about a naturally occurring process (and it wouldn’t prove anything one way or another, anyway).
        • Progress in science, I believe, is pretty much impossible to measure, and I’m not sure that it has much relation to self-improving intelligence.

        Somewhat tangential to this discussion: for the purposes of assessing the risk of AI, it’s useful to take a broader perspective and realise that AI, in fact, doesn’t need to exceed human intelligence or be an autonomous agent to cause a lot of problems. In this context, arguments about the possibility of intelligence explosion are a distraction.

        1. 1

          As soon as we establish that there is a possibility of superintelligent self-improving AI, we have to understand that there are huge risks associated with that, and have to proceed with caution rather than burying heads in sand.

          That’s like calling for planetary defences against an alien invasion because the discovery of unicellular life on Mars is imminent.

          We don’t have strong AI. The pattern matching we call “AI” right now is nowhere near that, yet we are supposed to believe that the qualitative jump is imminent. I’ll go with the voice of reason on this one.

          1. 2

            This piece on when technological developments are worth worrying about was a nice read on the issue. Not sure I’m convinced, but it’s at least taking seriously the question of whether anyone should care yet.

            1. 1

              But how do you determine what the voice of reason is? There are many reasonable people advising caution it seems. Are you sure you’re not going with comforting beliefs rather than reason?

              1. 1

                But how do you determine what the voice of reason is?

                By the number of changes to past mechanisms needed to fulfil the prophesied future, and my own knowledge of medicine and software engineering.

                1. 1

                  It’s not quite clear to me why expertise in medicine or software engineering is relevant to forming a reasoned position on intelligence explosion. (Let me know?)

                  I guess you might instead be referring to expertise in machine learning, AI, and neuroscience, in which case I’d love to learn your reasoning for why it’s impossible for intelligence explosion to occur (as long as it’s more substantial than reasoning by analogy, historical or otherwise).

        2. 2

          Got a link to the rebuttal?

        1. 1

          Ha!

          ’tis true: C++ can encode calling up monsters from the vasty deep.

          1. 3

            And then, destructors are not that virtual, are they?

            1. 3

              Why, so can I, or so can any man;

              But will they come when you do call for them?

            1. 3

              Meh. The advantages some programming languages bring to the table are sometimes very significant. They don’t just make the problem “slightly easier”. It’s likely that Go is popular because it makes concurrent networking programs significantly easier to write, compared to most mainstream languages; similarly, using OCaml (or something similar) to write a compiler or symbolic program is a huge improvement over doing it in C.

              1. 3

                It’s less about the language per se and more about how many primitives the language integrates, and how well chosen those primitives are.

                C has no automatic memory management nor concurrency primitives, and Go has both.

                Among languages that use async/await-style concurrency, their concurrency expressiveness is largely similar.

                All the “P languages” form a family based on the set of primitives they’re built on, in which they’re very close to each other, and so programs written in them tend to be structured broadly similarly, despite significant differences in some of the design choices of the languages. Sometimes those differences have practical impact too, but rarely from a zoomed-out perspective on the code structure.

                So the answer to “which language should I learn?” is fairly irrelevant if it’s to be taken as “which P language should I learn?” but is rather more meaningful if it implies “should I use Go, Haskell or Prolog?”. (Although even then it’s just one topic among the many you need an understanding of, as the article says.)

                1. 1

                  On the other hand, none of these languages have improved the ways people use their databases, write their queries, set their indices, deploy their servers, configure their networks, ….

                  Programming languages bring a lot to the table, but they are no longer the core of dealing with computers. It’s a huge chunk, but not as central as people make them out to be.

                  1. 2

                    Though not Go or OCaml, all the tasks you describe benefit from declarative languages, like SQL and Prolog. (Or Greenspunned versions thereof)

                    1. 2

                      Sure, if you cast the net wide enough, you could also call the Elasticsearch query syntax (which is basically the AST of a simple Lucene search program) a programming language. That isn’t practical, though, and not what people mean by “I’ll learn another programming language”.

                      SQL is a perfect example of that: it is rather worthless to know without at least having a hunch on how your specific database executes it. Plus, each product comes with its own extensions.

                      1. 2

                        it is rather worthless to know without at least having a hunch on how your specific database executes it

                        I feel this is deeply true of any programming language — it is mostly useless divorced from an implementation. I feel that knowing how to program in C is inseparable from knowing compiler extensions and intrinsics. And with the exception of (seemingly increasingly rare) languages defined by standards, one may not have any choice.

                        One difference between logic languages and imperative languages, here, is that most programmers have already deeply internalized a mental model of how imperative languages are executed (which still often fails to match the actual implementation… note the way one still finds people making performance assumptions that held perfectly well on the ZX Spectrum and not in the modern era).

                        Maybe we actually agree on something here: I think something the OP is successfully pointing out is that most people’s definition of “I’ll learn another programming language” is so shallow that it yields little compared to the effort they could put into learning other things. But, for example, I think learning something like Prolog (well enough to write production software: i.e., understanding at least one implementation well enough to reason accurately about performance and so on) is an exercise that yields knowledge transferable to plenty of other areas of programming; I suspect one can make this argument for any language and implementation that differs significantly from what one already knows.

                      2. 2

                        Like SQL and Prolog = Datalog. Seems like a good example where a new language can help with database queries.

                        https://en.wikipedia.org/wiki/Datalog
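                        Recursive queries like reachability are the classic case where Datalog beats plain SQL: naive Datalog evaluation is just iterating the rules to a fixpoint. A rough sketch, with made-up edge facts:

```python
# Naive (fixpoint) evaluation of the Datalog program:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- edge(X, Y), path(Y, Z).
# The edge facts below are made-up sample data.

edges = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edge_facts):
    paths = set(edge_facts)      # base rule: every edge is a path
    while True:                  # apply the recursive rule until fixpoint
        derived = {(x, z)
                   for (x, y) in edge_facts
                   for (y2, z) in paths if y2 == y}
        if derived <= paths:     # nothing new derived: fixpoint reached
            return paths
        paths |= derived

print(sorted(transitive_closure(edges)))
```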

                  1. 3

                    Writing an SMT solver… in OCaml. Taking this as an opportunity both to see how far I can push performance in OCaml for such CPU-intensive programs, and to get a better understanding of MCSat (a new-ish approach to SMT).

                    1. 1

                      I tried OCaml for a bit but the weirdness got to me after a while. There was a ton of magic around project setup and compilation that I didn’t understand and couldn’t find properly explained, and the fact there is more than one “standard” library bugged the heck out of me. I’m hoping that once the Linux story solidifies a bit more around .NET I’ll be able to reasonably give F# a shot.

                      1. 3

                        I’ve been using F# on Linux for a few years now using Mono. It’s a bit more manual than .NET Core, but it’s stable.

                        1. 3

                          If you’re interested in trying again, I created a build system (yes, yet another one) specifically designed for getting going fast in most cases. I have a blog post here:

                          http://blog.appliedcompscilab.com/2016-Q4/index.html

                          Short version: all you need is a pds.conf, which is in TOML so fairly straightforward, a specific directory structure (src/<project>) and GNU Make. Then you run pds && make -f pds.mk and you’re done. Supports tests as well as debug builds.

                          1. 5

                            I’m not sure it is worth pushing yet another build system that seemingly nobody uses (at least I haven’t yet run across a package which uses it) when jbuilder seems to be gaining so much momentum in the OCaml world lately.

                            1. 3

                              Maybe, but pds is pretty easy to port away from for most builds and it’s so trivial to get started and much less confusing than jbuilder’s config, IMO. My personal view is that jbuilder is a mistake but I’ll wait to switch over to it once it’s gained enough momentum. At that point, I can just switch pds over to producing jbuilder configs instead. But I’m a symptom of the problem rather than the solution unfortunately. I also use @c-cube’s containers, so yet another stdlib replacement/extension :)

                              1. 4

                                My personal view is that jbuilder is a mistake

                                Could you elaborate on why? IMO jbuilder is not perfect either, but if we get a modern, documented build system which is hopefully easy to set up, it would be a massive win over all the other solutions we currently use.

                          2. 1

                            I agree, the different choices in tooling are sort of disorienting and can lead to analysis paralysis. For a toy compiler project I started working on, I tried to find the most basic tooling that would work: whatever OCaml compiler came with my distro, ocamlbuild, make, and then eventually extlib, ocp-indent, and then after some more time opam, ocamlfind, utop. It may make sense to use the tooling outlined in this article if future maintainability is a big concern, but to get started and to learn OCaml, I don’t find it necessary (and definitely not appealing). Having done this, I don’t pine so much for standardization (;

                            1. 1

                              There’s more than one standard library in a lot of languages, though. Why does that bother you?

                              1. 4

                                It bothers me because it makes the language more difficult to learn. It also wasn’t always clear to me that an alternative was in use because, IIRC, they’re not (always) clearly namespaced. I have run into this in Haskell as well, FWIW.

                                1. 2

                                  Typically it’s visible when you use an alternative stdlib because you start your files with open Batteries or open Core or open Containers. I agree it’s annoying that the stdlib is not richer, and it’s a bit slow to accept contributions, but in a way the existence of alternative stdlibs/extensions shows how easy it is to roll your own :-)

                                2. 4

                                  You can’t have two standards, that’s a double standard!

                                  1. 1

                                    Which languages?

                                    1. 1

                                      Haskell, C, and D come to mind. You could also argue that Python has multiple standard libraries because it has different implementations that effectively can’t use some aspects of the normal stdlib (PyPy). Then there’s Java: SE, EE, and ME are the same language with different sets of functionality in the standard libraries.

                                  2. 1

                                    Out of curiosity, have you tried OP’s project setup?

                                    Also, there is only one OCaml standard library–the one that comes bundled with OCaml. The other ‘standard libraries’, Batteries and Jane Street’s Core, are optional add-ons made for specific purposes.

                                    1. 2

                                      I haven’t tried OP’s setup, but honestly it seems even worse than what I had. I pretty much followed this: https://ocaml.org/learn/tutorials/get_up_and_running.html. I ended up using Oasis, which was just awful: every time I added a file or dependency I had to fiddle with the config until everything would build again, but at least there wasn’t an entirely separate language.

                                      From OP:

                                      (jbuild_version 1)
                                      
                                      (executable
                                        ((name main)                 ; The name of your entry file, minus the .ml
                                         (public_name OcamlTestProj) ; Whatever you like, as far as I can tell
                                         (libraries (lib))))         ; Express a dependency on the "lib" module
                                      

                                      Note the comment, “as far as I can tell”. To me, that’s a terrible sign. A person who has gone to a reasonable amount of effort to explain how to set up a project can’t even figure out the tooling completely.

                                      1. 2

                                        Jbuilder is quite nicely documented (see http://jbuilder.readthedocs.io/en/latest/). The public_name defines the name of the produced executable in the install context. It does not take much effort to read it from there.

                                        1. 2

                                          Of course you still have to find out that Jbuilder exists, which the official site doesn’t seem to mention… I am lazy, I don’t like choices, I just want one, blessed tool that works more or less out-of-the-box if you follow a set of relatively simple rules (I’m even OK with wrapping the tool in a simple, handwritten Makefile, which is what I do in Go). I’m not arrogant enough to think that the way I prefer is the “right” way, in fact in some cases it would be dead wrong (like for extremely complex, multi-language software projects), but that explains why I dropped OCaml for hobby stuff.

                                          1. 1

                                            OK, but your criticism is that you have to find out that JBuilder exists, commenting on a post that tells you about JBuilder.

                                            1. 1

                                              To be fair, jbuilder is very young (not even 1.0 yet, actually) but it might become the “standard” build tool the OCaml community has been waiting for, for years (decades?). Then clearly there will be more docs and pointers towards it.

                                              1. 1

                                                Well obviously I know about it now, but it still isn’t terribly discoverable for someone new to the language. My actual point, and I probably didn’t make this as clear as I should have, sorry, is that in my experience OCaml isn’t very friendly to beginners, in part because its tooling story is kind of weak and fragmented.

                                                1. 2

                                                  Yeah. This is true. Especially on Windows. People are working on it but it’s slow and it’s taking time to consolidate all the disparate efforts. I myself am not getting terribly excited about OCaml native but funnily enough I am about BuckleScript (OCaml->JS compiler) because of its easy setup (npm i -g bs-platform) and incredible interop story.

                                                  Others are getting equally into ReasonML (https://reasonml.github.io/) because it’s coming from a single source (Facebook) and is starting to build a compelling tooling/documentation story.

                                                  1. 2

                                                    I didn’t know about either of these, thanks!

                                          2. 1

                                            OP here: I didn’t really make any effort to pursue documentation re: the public_name field, and I have really almost no production experience with OCaml whatsoever. I certainly have complaints about OCaml’s tooling, but I can assure you that any argument against it appealing to my authority is certainly flawed.

                                            1. 1

                                              I wasn’t really appealing to your authority, in fact kind of the opposite. I don’t like using systems that converge to copy-paste magic, and that seems to be what you did, and is likely what I would do. I don’t want to use a weird programming language to configure my project, I want something simple, with happy defaults, that can be understood easily.

                                              I guess I generally prefer convention over configuration in this case, and that doesn’t seem to be what the OCaml community values, which is why I gave up on it. I’m not saying anyone is right or wrong, it’s just not a good fit for me, particularly for hobby projects.

                                      1. 2

                                        The reddit thread shows how much people are divided on the issue. I wonder if the split is along the lines of library writers (pro-generics) vs app writers (no need for generics)?

                                        1. 2

                                          Outside of google, for frontend dev, I don’t see any point in using Dart over TypeScript. Pretty likely that even the type system will be better in TS.

                                          1. 5

                                            I can totally relate to that. I’ve been writing research-oriented software, very productively, but now I’m trying to help fellow researchers modify and adopt the code and it’s not always easy to realize how much I internalized the code. It’s 2 projects of > 30kloc OCaml and of course it must be a bit maze-like for them.

                                            1. 5

                                              It looks like D could be a really good language for most applications, in particular on linux. People seem to either stick with C (unsafe, no abstraction) or Python (slow and untyped)… At least D can be reasonably high-level (like Python) but still very performant. I’m just a bit pessimistic on the chances that languages that have been around for a while suddenly become popular.

                                              1. 6

                                                I’m just a bit pessimistic on the chances that languages that have been around for a while suddenly become popular.

                                                ironic for an ocaml person to say that :)

                                                1. 2

                                                  s/ironic/realistic/ ;-) I love OCaml, but I doubt it will ever become popular. Maybe the Reason syntax (which is more C-like, something that can help a lot) will change that, though I won’t hold my breath.

                                                  1. 3

                                                    oh :) i was thinking of the way ocaml has suddenly seen a spike in popularity over the last few years - it will never be C-level popular, but it definitely feels like it has a lot of momentum and community activity it didn’t have for a long time.

                                                    1. 1

                                                      Indeed, some factors made this possible (better tooling with merlin, the opam package manager, …). The community is active, and more people have joined it, but it still is small.

                                                2. 3

                                                  in particular on linux.

                                                  I would love to see D as a viable alternative for Windows development as well, but since both dmd and ldc have a hard dependency on MSVC, I don’t see that coming soon; it makes cross-compilation from Linux quite difficult, if not impossible. GDC might be able to fill this hole, but it is still a one-man show and will only evolve very slowly (not to mention what happens if the maintainer loses interest). Also, I have been told GDC produces giant executables for small programs, but that might improve more quickly.

                                                  For Linux, I think there are enough easily installable, modern alternatives to C/C++ that I don’t think that that’s a place where D could shine.

                                                  1. 1

                                                    I’m just a bit pessimistic on the chances that languages that have been around for a while suddenly become popular.

                                                      If you model language usage as a logistic curve then this scenario is perfectly realisable.
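                                                      A quick numeric sketch of that model (growth rate and inflection point are arbitrary illustration values): adoption can sit near zero for a long stretch, then shoot up around the inflection.

```python
import math

# Logistic adoption model: share(t) = 1 / (1 + exp(-k * (t - t0))).
# k (growth rate) and t0 (inflection time) are arbitrary, made-up values.
def adoption(t, k=1.0, t0=20.0):
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

early = adoption(5.0)    # well before the inflection: near-zero adoption
before = adoption(18.0)  # just before the inflection: still small
after = adoption(22.0)   # just after: already dominant
print(early, before, after)
```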

                                                  1. 8

                                                    Idris looks really well designed, and I think these improvements are actually quite significant. Strictness by default is a game-changer for me; apparently the records and monads are more convenient to use (and there are effects, too? Not sure how experimental they are). If Idris was self-hosted, produced good static binaries with performance comparable to OCaml, and had a package manager I would definitely give it a serious try.

                                                    1. 6

                                                      The missing package manager is why I cling to Haskell. Right now I’ve worked through the Idris book and loved it, but it’s impractical to make a real program without a package manager.

                                                      1. 5

                                                        Idris also has a quite buggy implementation at the moment, but like everything else you mentioned, it is a solvable problem. I think it’s a contender for a widely used industrial language in the future. Though at the moment it’s mainly used by people with pretty sophisticated FP knowledge, I think its dependent types and effect system may ultimately become something that’s easier for newcomers to understand than a lot of Haskell is.

                                                        1. 7

                                                          They are pretty unapologetic about 1.0 not being industry-grade, and it is not quite the goal of the language:

                                                          Will version 1.0 be “Production Ready”?

                                                          Idris remains primarily a research tool, with the goals of exploring the possibilities of software development with dependent types, and particularly aiming to make theorem proving and verification techniques accessible to software developers in general. We’re not an Apple or a Google or [insert large software company here] so we can’t commit to supporting the language in the long term, or make any guarantees about the quality of the implementation. There’s certainly still plenty of things we’d like to do to make it better.

                                                          All that said, if you’re willing to get your hands dirty, or have any resources which you think can help us, please do get in touch!

                                                          They do give guarantees for 1.0:

                                                          Mostly, what we mean by calling a release “1.0” is that there are large parts of the language and libraries that we now consider stable, and we promise not to change them without also changing the version number appropriately. In particular, if you have a program which compiles with Idris 1.0 and its Prelude and Base libraries, you should also expect it to compile with any version 1.x. (Assuming you aren’t relying on the behaviour of a bug, at least :))

                                                          Don’t get me wrong, I believe Idris is a great language precisely because of that: they want to be primarily a research language, but provide a solid base for research happening on top of their core. They have a small team and use those resources well for one aspect of the language usage. I would highly recommend having a look at it and working with it, this is just something to be aware of.

                                                          from https://www.idris-lang.org/towards-version-1-0/

                                                          1. 5

                                                            Haskell is great because it’s where a lot of these ideas were tested and figured out. But it also has the cruft of legacy mistakes. Haskell can’t get rid of them now, but other languages can certainly learn from them.

                                                        1. 2

                                                          I’m a bit confused about the future of mono, now that .NET Core exists. Are both projects going to continue separately? I heard the .NET runtime was better than mono…

                                                          1. 1

                                                             This is made even more confusing by the fact that the Mono team adopted several libraries that were open-sourced by MS. The line between MS’s libraries and Mono’s is pretty blurry these days.

                                                            1. 1

                                                              Okay, so basically everything is a mess from a naming perspective (which I’ll explain in a second), but the quick version is we still care about Mono.

                                                              Mono used to be two things: a VM and an implementation of the .NET Standard. .NET Core is also a VM and an implementation of .NET Standard.

                                                              Recently, Mono has begun ditching their libraries and instead using Microsoft’s directly. This actually began before .NET Core/.NET Standard was a thing, as Microsoft began open-sourcing more and more central .NET libraries, and it’s nowhere near done (Mono still has many of its own libraries that don’t exist for Microsoft VMs/has its own implementations of some libraries), but it seems clear that Mono is skating towards using Microsoft’s assemblies wherever possible.

                                                              Despite that work, Mono still has its own (very different) VM, and shows no sign of merging with .NET Core’s. This specifically means Mono can do three things that .NET Core cannot (yet): it can do ahead-of-time compilation so no JIT is required; it can target ARM; and, largely as a result of those two facts, it can run on Android and iOS. This means that, at least for now, Mono is still super-relevant due to its VM, if nothing else–and especially since having a concurrent GC is incredibly helpful to making a consumer application feel responsive, and because Mono is heavily used on mobile, I think this improvement will be incredibly welcome.

                                                              Long-term, I don’t see a huge future for Mono. .NET Core is playing with statically compiled executables (called .NET Native), and while it’s absolutely not ready for real use yet, I expect it to get there eventually. It’s only a matter of time before it works, they add ARM support, and they turn on the ability to target iOS and Android. But I still suspect Mono will be the go-to CLR on mobile for a while.

                                                            1. 2

                                                              somehow I was expecting a Doom-based programming game ^^

                                                              1. 1

                                                                This was pretty interesting. The key seems to be a central coordinator making idempotent, commutative, and subsequently-cancellable requests to distributed services.

                                                                1. 2

                                                                  Wasn’t it? I like how both requests and cancellations are idempotent, but cancellations are not cancellable. It makes the whole system very monotonic, as in the 2-phase commit protocol they seem to draw inspiration from.

                                                                  1. 1

                                                                    Having given it some more thought, the commutative property of requests and compensating requests is both important and really tricky to get right. For example, from slides 54-59 of her presentation, these two sequences should result in the same outcomes:

                                                                    1. Book the Green Toyota for Mrs. Doe
                                                                    2. Cancel Mrs. Doe’s Green Toyota

                                                                    vs.

                                                                    1. Cancel Mrs. Doe’s Green Toyota
                                                                    2. Book the Green Toyota for Mrs. Doe

                                                                    But that second one is very non-intuitive, and forces the backing services to develop a more sophisticated resource model than just creating vs. deleting something from a datastore. Still, it suggests a lot of thorough and non-obvious tests to write.

                                                                    1. 2

                                                                      Not sure it’s that much more complicated: any task could have a state which follows a simple state machine non-existent -> {ok | error | cancel} -> cancel. Then, a task could be initialized directly in the “cancel” state if the compensating request arrives first (meaning that the normal request will have no effect).
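                                                                      Roughly, in code (class and method names are illustrative, not from the talk): cancellation is idempotent and absorbing, so booking and cancelling commute.

```python
# Sketch of the "cancel may arrive before book" state machine:
#   non-existent -> {ok | error | cancel} -> cancel
# Both operations are idempotent; cancel is absorbing (not cancellable).

class Bookings:
    def __init__(self):
        self.state = {}  # task id -> "ok" | "error" | "cancel"

    def book(self, task_id):
        # No effect if the task already has a state -- including a
        # cancellation that arrived before this booking request.
        self.state.setdefault(task_id, "ok")
        return self.state[task_id]

    def cancel(self, task_id):
        # Idempotent and absorbing: applies even to tasks never seen,
        # pre-empting a booking that arrives later.
        self.state[task_id] = "cancel"
        return self.state[task_id]

b1, b2 = Bookings(), Bookings()
b1.book("green-toyota"); b1.cancel("green-toyota")   # book, then cancel
b2.cancel("green-toyota"); b2.book("green-toyota")   # cancel, then book
print(b1.state == b2.state)  # prints True: the two orderings commute
```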

                                                                      1. 1

                                                                        Oh yeah, once you know that this is a requirement, it’s not hard. Knowing it’s needed and why is the sophistication!

                                                                      2. 1

                                                                        More sophisticated, sure, but I don’t think by very much. It just requires a “simplification” step before flushing the request queue into actual actions on the database.

                                                                  1. 1

                                                                    For some reason, “internet of surgeons” doesn’t sound like that good an idea now…