1. 84

  2. 16

    Oof, accessing out of bounds memory is pretty surprising to me for a dynamic language … But I guess it’s not surprising if your goal is to compile to fast native code (e.g. omit bounds checks).

    I don’t know that much about how Julia works, but I feel like once you go there, you need to have very high test coverage, and also run your tests in a mode that catches all bound errors at runtime. (they don’t have this?)

    Basically it’s negligent not to use ASAN/Valgrind with C/C++ these days. You can shake dozens or hundreds of bugs out of any real codebase that doesn’t use them, guaranteed.

    Similarly if people are just writing “fast” Julia code without good tests (which I’m not sure about but this article seems to imply), then I’d say that’s similarly negligent.


    I’ve also learned the hard way that composability and correctness are very difficult aspects of language design. There is an interesting tradeoff here between code reuse via multiple dispatch / implicit interfaces and correctness. I would say they are solving O(M x N) problems, but that is very difficult, similar to how the design of the C++ STL is very difficult and doesn’t compose in certain ways.

    1. 8

      I haven’t tested it, but I also wondered “how can this be?” You can launch Julia as julia --check-bounds=yes which should override the @inbounds disabling of bounds checking.

      If that works, then the fact that the @inbounds bugs from the original article persisted for many years in spite of this “flip a switch and find them” option probably says the issue is more the “culture”. People often confuse culture and PLs, but it is true that as a consumer (who does not write all their own code) both matter.

      1. 5

        Yeah one thing I would add that’s not quite obvious is that you likely need “redundant” tests for both libraries and APPLICATIONS.

        This is because composing libraries is an application-specific concern, and can be done incorrectly. With Julia’s generality and dynamic nature, that concern is magnified.

        Again I’d make an analogy to C++ STL – you can test with one set of template instantiations, e.g. myfunction<int, float>. But that doesn’t mean myfunction<int, int> works at all! Let alone myfunction<TypeMyAppCreatedWhichLibraryAuthorsCannotKnowAbout, int>.

        In C++ it might fail to compile, which is good. But it also might fail at runtime. In a dynamic language you only have the option of failing at runtime.
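
        A tiny Python sketch of that runtime-failure mode (the function and inputs here are invented for illustration): a generic function tested with one type works fine, while another type that happens to satisfy the same implicit interface fails only deep inside the call:

```python
def average(xs):
    # Implicit interface: assumes xs supports indexing, slicing, +, and len().
    total = xs[0]
    for x in xs[1:]:
        total = total + x
    return total / len(xs)

print(average([1.0, 2.0, 3.0]))  # the tested combination: prints 2.0

# A type the author never tested: strings satisfy the same implicit
# interface right up until the final division, so the failure shows up
# only at runtime, deep inside the function.
try:
    average("abc")
except TypeError as e:
    print("fails at runtime:", e)
```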


        I have a habit that I think is not super common – I write tests for the libraries I use, motivated by the particular use cases of my application. I also avoid package managers that pull in transitive dependencies (e.g. npm, Cargo, etc.)

        But yeah it sounds like there is a cultural change needed. I have some direct experience with an open source C maintainer rejecting ASAN changes… simply due to ignorance. It can be hard to change the “memes”.

        So to summarize I would say that in Julia it’s not enough for libraries to run tests with --check-bounds=yes – applications also need tests that run with it. And the tests should hit all the library code paths that the application hits.

        1. 6

          Any Julia project (library or application) that has tests and runs them in the standard way will run them with bounds checking turned on.

          The issue was that Yuri was using a type with an indexing scheme the author hadn’t expected, and that scenario was not tested.

          1. 4

            Yeah so then I think it boils down to the (slightly non-obvious) practice of applications writing tests for libraries they depend on.

            This is not limited to Julia – I do it all the time, e.g. for C and C++. But it does seem more crucial because of the power of the language.

            1. 2

              I think my take is that Julia libraries should do (more) proptesting/fuzzing and that type of array and type of number should be some of the dimensions that vary.

              Tho I also agree that you should test the combinations of libraries you use in experimental work, and perhaps the relative immaturity of Julia’s ecosystem means that extends to quite a lot of projects in Julia.
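
              A hand-rolled sketch of that kind of property test, in Python for illustration (mysum is a stand-in for some library function; a real setup would use a proper property-testing framework). The container type and the number type are the dimensions being varied:

```python
import random
from fractions import Fraction

def mysum(xs):
    # Stand-in for a library function under test; assumes 0-based indexing.
    total = 0
    for i in range(len(xs)):
        total += xs[i]
    return total

# Property: mysum agrees with the builtin sum for every combination of
# container type and number type we throw at it.
containers = [list, tuple]
numbers = [int, float, Fraction]

for _ in range(100):
    container = random.choice(containers)
    number = random.choice(numbers)
    data = container(number(random.randint(-5, 5)) for _ in range(8))
    assert mysum(data) == sum(data), (container, number, data)
```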

          2. 4

            So to summarize I would say that in Julia […] applications also need tests that […] hit all the library code paths that the application hits.

            You are probably correct; this sounds like a well informed opinion.

            It is also great information warning off any would-be users of Julia. Knowing this is a legitimate best practice in the language ecosystem there is no way I would even consider getting entangled.

            1. 4

              I haven’t used Julia much, but for some problems your only real options are Matlab and Julia (excluding C++ and Fortran). If I had those types of problems I would pick Julia!

              (On the other hand, I pick R for data manipulation and stats. Python and Julia are both trying to port R’s abstractions in that domain, but R is where they originated, and I think they’re the best.)

              FWIW I do think Julia has some really good language design decisions. For example I borrowed their whitespace stripping rule for multi-line strings in Oil.

              And there is a huge UPSIDE to multiple dispatch – but there is also a downside. I think it can be mitigated with culture.

          3. 3

            Bounds checking is turned on by default when running package tests. The issue is that the bounds bugs don’t trigger for regular arrays. If some tests had been written with OffsetArrays then the errors would have been seen.

            There’s also the context that many of the affected packages were written before Julia supported arrays that are not indexed from 1 and were not updated (to be fair, not that many people use weirdly indexed arrays).

            1. 4

              It sounds like they need a “meta-test”, aka a “test linter”, to validate that “tricky” types are tested. This gets “AI-complete” quickly, of course, but just taking a default list of types, one that includes OffsetArray { EDIT: or your new EvilArrays }, to “check that you test” might not be a bad start… :) Maybe this already exists but is not used?
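
              Such a linter could start out very dumb; a hypothetical Python sketch (the type list and function name are made up):

```python
# Hypothetical "test linter": flag tricky types that a test file never
# mentions. Crude, but a cheap starting point before anything "AI-complete".
TRICKY_TYPES = ["OffsetArray", "SparseVector", "BigFloat"]  # assumed defaults

def untested_tricky_types(test_source: str) -> list[str]:
    return [t for t in TRICKY_TYPES if t not in test_source]

src = "using OffsetArrays\n@test mysum(OffsetArray([1, 2], -1)) == 3\n"
print(untested_tricky_types(src))  # ['SparseVector', 'BigFloat']
```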

              “Tools that could help were they only used but they aren’t” sounds like a cultural problem to me. As I said, that does not make it an unreal problem. Academic coders can be especially difficult to habituate to any sort of discipline. In fact, I might say it’s a harder to solve problem than more technical things. Many can just “graduate and move on”.

              I cannot find it now, but there was a recent article about “what programming languages usher/cajole you into doing” mattering. As with any culture, there is also “what kind of people a PL attracts”. They relate. Julia has been REPL/JIT compile focused from the start (to displace Matlab/Python). People who prioritize that kind of development over Text Editor/Version Controlled/Test-Driven|Regulated dev are simply different. {EDIT: or it could be the same person wearing different hats/being in a different mindset…Humans are complicated. :-) }

          4. 3

            FWIW I feel my message led to some “piling on”, and not all the replies understood the difficult language design issues here.

            So I posted this counterpoint: The Unreasonable Effectiveness of Multiple Dispatch (JuliaCon 2019)

            I would rather work with a language that composes, but where you have to be slightly careful. (Shell is a lot like this.)

            As opposed to working with QUADRATIC AMOUNTS of brittle, poorly composing, bespoke code. That leads to a different explosion in the test matrix, which makes programs even more unreliable.


            I think the Julia designers did something very difficult, with huge benefits. But there is a downside, which is very well expressed by this post.

            But I think that solving / mitigating O(M x N) code explosions is the most important and most difficult thing in language design. Drafts here related to “narrow waists”:

            https://oilshell.zulipchat.com/#narrow/stream/266575-blog-ideas/topic/Solving.20M.20x.20N.20code.20explosions.20is.20the.20most.20important.20thing (requires login)

            In other words, in all good languages, everything is an X. For Julia that’s fine-grained and dynamically typed pure data, which can be operated on by functions with multiple dispatch, and which can be compiled to efficient code.

          5. 10

            I have a question after reading this. Are Julia’s correctness problems a result of language features leading to irreconcilable errors? Or is it more that the community disregards correctness? Or a lack of certain correctness-assisting foundations?

            1. 12

              This passage towards the end makes me think it’s about the foundations:

              Given Julia’s extreme generality it is not obvious to me that the correctness problems can be solved. Julia has no formal notion of interfaces, generic functions tend to leave their semantics unspecified in edge cases, and the nature of many common implicit interfaces has not been made precise (for example, there is no agreement in the Julia community on what a number is).

              The Julia community is full of capable and talented people who are generous with their time, work, and expertise. But systemic problems like this can rarely be solved from the bottom up, and my sense is that the project leadership does not agree that there is a serious correctness problem. They accept the existence of individual isolated issues, but not the pattern that those issues imply.

              1. 3

                I guess I’m wondering whether the problems could be addressed by adding formal interfaces, creating definitions for the implicit ones like number, and pinning down the undefined semantics of generic functions, or whether they can’t be addressed in these ways at all.

              2. 4

                This seems to be a typical engineering issue. Take the example of array indexing base: one part of the community produces packages using an interface that is base-agnostic, while another part assumes 1-based arrays, and yet another part assumes 0-based. As there are no constraints on Julia’s generic API, any packages are compatible as long as they implement a set of functions, but not enough effort goes into making sure packages use the most generic form and are error-free in edge cases.

                This could happen in any language that allows overloading. If in C++ one package has a sum function that accumulates values indexed with operator[] from 0 to array.size()-1, and another package has a custom array implementation whose operator[] works from 1 to array.size() without bounds checking, then using the sum from the first package on an array from the second package would compile but would definitely be wrong. I don’t think it’s either the first package’s fault or the second package’s fault. I could write a sparse array implementation whose operator[] only works on even indices, and that could easily trip up anybody trying to use a generic function supplied by some other package that assumes operator[] accepts all integer indices.
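
                The same mismatch is easy to reproduce in a dynamic language; a Python sketch (class and function names invented), where a “generic” function that assumes 0-based indexing returns a silently wrong answer on a 1-based container instead of failing to compile:

```python
class OneBasedArray:
    """A container indexed from 1 to len, with no bounds checking."""
    def __init__(self, values):
        self._values = values
    def __len__(self):
        return len(self._values)
    def __getitem__(self, i):
        return self._values[i - 1]  # index 0 silently wraps to the last element

def weighted_total(xs):
    # "Generic" function that assumes 0-based indexing, like the C++ sum.
    return sum(i * xs[i] for i in range(len(xs)))

print(weighted_total([10, 20, 30]))                 # 80: correct for lists
print(weighted_total(OneBasedArray([10, 20, 30])))  # 50: silently wrong, no error
```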

                There would be two solutions. One would be a language-level check relying on types, though that’s likely very restrictive and won’t work for all the cases. The other would be our usual way: whoever is using the packages should test extensively and make sure the packages they are using are compatible.

                Now, going back to Julia specifically, I believe the promise of the generic interface is exaggerated, and packages really shouldn’t promise to be compatible with any random package thrown at them. Something like this kind of issue, https://github.com/JuliaStats/Distances.jl/issues/201, is easy to fix, but adds a lot of burden for developers. Should Distances.jl always work for any custom-made Array interface? Here the fix only covers the log metric. What if I have a package with an array over a Lie algebra, or a fibre bundle of Lie algebras on a Riemannian manifold? Is it reasonable to make the official Array interface understand fibre bundles?

              3. 9

                I can definitely +1 the code quality issues around community packages. The core language quality is pretty good, but there are so many projects that are one-offs written by an academic, and these become core libraries for other more complex libraries!

                1. 4

                  That’s because they alienate professional engineers by not fixing the warm-up issue, the compiled exe size, and other such earthly considerations. Too bad too, because the language itself is beautifully designed.

                  1. 6

                    I think that’s a bit unfair. If those issues were easily solved then they would have been already. The core devs have put a lot of effort into both of those issues, and things are getting better (as of v1.8 you can now opt-out of bundling llvm with your binary and can strip symbols and IR for smaller binaries; and there have been loads of changes over the last few years to reduce warm-up time (really, compilation time)).

                    1. 1

                      I don’t mean to be unfair. I’m sure there are difficulties in achieving those goals.

                      My impression is that these problems could have been solved, had they been prioritized.

                      Either way, intentions aside, the result is that Julia currently isn’t practical for most real-world software, i.e. that involves distributing to users, and providing a reasonable user experience. I’m fairly certain it’s a big factor in the immaturity of the ecosystem.

                      1. 2

                        I completely disagree. Julia is extremely practical and it’s replacing a TON of R / Python / Fortran code. Especially in the HFT / ML / Bioinformatics worlds.

                    2. 2

                      I don’t think that’s true. There’s a fair number of professional engineers working on libraries in the space. The problem is that you have very smart academics who are writing one-off libraries to finish their research, then no one wants to maintain it.

                  2. 8

                    Reading through those issues, there seem to be a lot of really surprising ones! And a lot of them, when you dig into the fix, seem to be related to code that is trying to be performant.

                    This sort of culture difference is something I’ve seen a lot. In general I try to write correct, easy to review code (at the expense of performance/work duplication). I don’t always succeed (far from it!) but I at least think I have a sense for stuff that is easier or harder to get right.

                    Here, for example, the way Julia tries to figure out the day of the year is through this builtin lookup table. While this feels like the right/most performant solution (and despite leap years being a constant thing, it is easy to forget about them), my dumb first pass at this kind of problem would be to instead do something like start_of_quarter = quarterstart(date); (date-start_of_quarter).days. Definitely less performant! But I’m way more confident in it being correct.
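
                    For concreteness, here’s what that “dumb first pass” might look like in Python’s datetime terms (quarterstart and dayofquarter are hypothetical helpers mirroring the pseudocode above):

```python
from datetime import date

def quarterstart(d):
    # First day of the quarter containing d.
    first_month = 3 * ((d.month - 1) // 3) + 1
    return date(d.year, first_month, 1)

def dayofquarter(d):
    # Let the date type do the calendar math: slower than a lookup
    # table, but leap years are handled for free.
    return (d - quarterstart(d)).days + 1

print(dayofquarter(date(2020, 2, 29)))  # 60 (leap day counted correctly)
print(dayofquarter(date(2021, 4, 1)))   # 1
```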

                    I think Python might have an advantage here because it has a lot of non-performant helpers that are fun to write with, so you’re incentivised to write correct (if slower) code.

                    “Code shouldn’t rely on bit twiddling” feels like a hard sell in a language where performance is one of the selling points of the language, but “bugs are very bad! Let’s try to write code that’s as obviously correct as possible” feels like a decent strategy to me.

                    1. 4

                      For a young language, problems and bugs in the base implementations seem quite reasonable.

                      Problems in community libraries, equally young, I think are similarly reasonable.

                      If these problems are found, addressed, and never come up again, it would seem the language is maturing, and going in the right direction.

                      1. 2

                        It would also seem such a young and raw language is not ready for production use.

                        1. 10

                          Since the language is designed for data-crunching on your workstation etc., “production” use is a pretty vague idea.

                          1. 2

                            The ‘etc’ seems to be doing a lot of work, but I’ll hazard a guess that Julia was designed for more than workstation data crunching, and the intent was to productionize into applications. At least, that’s what I get from blurb on the home page:

                            • ‘Julia was designed from the beginning for high performance. Julia programs compile to efficient native code…’
                            • ‘One can build entire Applications and Microservices in Julia.’
                      2. 2

                        There’s a 2014 blog post from Dan Luu to roughly the same effect. It sounds like the issues were much worse at that point, but the root cultural problems may be unchanged.

                        That post was https://danluu.com/julialang/ and it was discussed here at https://lobste.rs/s/omdncv/review_julia_language_2014

                        1. 2

                          Tangentially related, does anyone have a recommendation for a quick (to write and execute) language for processing data in parallel? I find that I often need to do things like parse 100 JSON files and do some calculations on the data in each file. I normally write my data processing scripts in Python, but it is bad at parallel processing. I heard Julia is good at that task, but the one time I tested it, the performance wasn’t anything special. I was thinking of trying Julia again and/or the Flow package in Elixir, but was wondering if anyone had opinions.

                          1. 3

                            I’ve had luck in python by parallelizing using asyncio and ProcessPoolExecutor. asyncio is very ergonomic IMO, and by using ProcessPoolExecutor you can sidestep the GIL problem.
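
                            Roughly what that combination looks like (the JSON layout and the per-file calculation here are made up):

```python
import asyncio
import json
import tempfile
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def process_file(path):
    # CPU-bound work runs in a worker process, sidestepping the GIL.
    data = json.loads(Path(path).read_text())
    return sum(data["values"])  # hypothetical per-file calculation

async def main(paths):
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Fan the files out across processes, gather the results concurrently.
        return await asyncio.gather(
            *(loop.run_in_executor(pool, process_file, p) for p in paths)
        )

if __name__ == "__main__":
    # Two throwaway JSON files just to make the sketch runnable.
    paths = []
    for values in ([1, 2, 3], [10, 20]):
        f = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
        json.dump({"values": values}, f)
        f.close()
        paths.append(f.name)
    print(asyncio.run(main(paths)))  # [6, 30]
```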

                            1. 1

                              Thanks, I haven’t used asyncio much before. I’ll check it out!

                            2. 3

                              The Python multiprocessing module?

                              from multiprocessing import Pool, cpu_count
                              import os

                              def doIt(jsonPath):
                                  return True  # replace w/app logic

                              if __name__ == "__main__":
                                  p = Pool(int(os.environ.get("PAR", cpu_count())))
                                  for res in p.imap_unordered(doIt, paths): print(res)
                              
                              1. 3

                                Thanks! This is what I have currently been using, but I heard there are some potential bugs with this approach. After looking into it a little more today, it looks like using with get_context("spawn").Pool() as pool: should help avoid some bugs/pitfalls.

                              2. 2

                                Sounds like overkill to go look for another language.

                                Write your python script so it gets the input filename as a command line argument. List the files with ls and pipe to xargs taking just one argument at a time and setting the parallelism to whatever number of processes you find reasonable. Google: “xargs number of processes” for examples.

                                Or if you don’t like shell scripts, create a process pool with the multiprocessing Python module included in the standard library.

                                Python’s multithreading is arguably not too efficient due to the infamous GIL. But AFAIK, it does multiprocessing like any other language.