1. 1

    The high count of compdef calls indicates that this run is without a compinit cache. The lack of calls to compdump also indicates that it isn’t creating a dump file. This cache speeds up compinit startup massively. I’m not quite sure how the blog post author has contrived to not have a dump file.

    1. 4

      “contrived” is a rather strong word to use here and suggests intent (perhaps to deceive). I’m not sure if it was your intent or not.

      1. 3

        This actually is with a dump file. A .zcompdump-MACHINENAME file is created in ~ (see here).

        The issue is that this is recreated each time the shell starts up. There are multiple places in OMZ that call compinit. In the additional reading at the bottom there is a link to a tweak that makes zsh recreate it only once a day, but I still feel like that’s not ideal.

        1. 2

          Can you redefine compinit to a no-op during OMZ loading, and then do it yourself at the end?

          IMO zshrc should explicitly call compinit so it happens exactly once in a central location.
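          A sketch of that approach, assuming the default oh-my-zsh install path (hedged: untested against every OMZ version, so treat it as a starting point):

          ```shell
          # Make compinit a no-op while oh-my-zsh and its plugins load,
          # so their repeated calls cost nothing.
          compinit() { :; }

          source "$HOME/.oh-my-zsh/oh-my-zsh.sh"

          # Drop the stub and run the real compinit exactly once.
          # -C reuses the existing ~/.zcompdump instead of auditing and
          # rebuilding it on every startup; omit -C to keep the security check.
          unfunction compinit
          autoload -Uz compinit
          compinit -C
          ```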

        2. 2

          Maybe he didn’t know?

        1. 4

          I wish I could upvote more than once. Very neat tool.

          1. 2

            For Java I found it easiest to do this “from the other side”; there are maven plugins that will generate an adequate .deb that bundles all the jars into a folder in /usr/share and puts a launch script in /usr/bin and/or reasonable service/daemon configuration. Then the .deb is just built as part of the maven release (and uploaded to an internal apt repository).

            1. 1

              I find fat-jar practical.

              1. 1

                A shaded executable jar is useful but not sufficient if you want to e.g. start the service on boot. So there’s some value in having an actual .deb. I was somewhat surprised by this approach when I first joined that company but it worked well in practice.

                1. 1

                  Right, we use fat-jar + start scripts bundled in self-contained archive files.

            1. 5

              Wouldn’t proper compiler flags detect those invalidated checks?

              1. 2

                Yes. With GCC I think it’s caught by -Wextra.
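                A minimal sketch of the kind of invalidated check it catches (on GCC the specific diagnostic comes from -Wtype-limits, which -Wextra enables):

                ```c
                #include <stdio.h>

                /* `gcc -Wextra` (via -Wtype-limits) warns here:
                 * "comparison of unsigned expression >= 0 is always true",
                 * i.e. half of the bounds check is invalidated by the type. */
                static int in_bounds(unsigned int i, unsigned int len) {
                    return i >= 0 && i < len; /* `i >= 0` is vacuous for an unsigned type */
                }

                int main(void) {
                    printf("%d %d\n", in_bounds(3, 10), in_bounds(10, 10)); /* 1 0 */
                    return 0;
                }
                ```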

              1. 7

                20k LoC single header file? NOPE.

                -rw-r--r--   1 fsaintjacques staff 666K Apr 19 09:30 nuklear.h
                

                devilish indeed.

                1. 1

                  Could you elaborate for those of us less familiar with C? Why is that a dealbreaker?

                  1. 2

                    This is a personal and debatable opinion unrelated to C, but I find giant files unreadable from a diving-into-a-new-codebase standpoint. I consider it good engineering practice to separate logical units into distinct files. The author has another project which does this: https://github.com/vurtun/mmx.

                    I think there’s a balance between micro and humongous source files. As an analogy with writing, think about having paragraphs of one line versus one giant paragraph: you want neither.

                    1. 2

                      With respect to my colleague here, it’s not actually a dealbreaker.

                      In fact, given the lack of a standardized package manager and build system for C and C++ projects, it is often preferable to simply have a source file or two for a neat feature. The other options are generally:

                      • Rely on the system headers and libraries to exist/be the correct version (via aptitude or whatever)
                      • Bundle the full source files in a lib/vendor directory and tweak the build system to build them too if needed (for say dynamic libs)
                      • Carefully package up the library headers and static libs, and hope you’re building with the correct versions (architecture, debug settings, linkage, etc.)

                      And then there is the joy of trying to actually step-through that garbage when debugging.

                      Or, you can just add a big honking header file like this and move along, though I think it would’ve made more sense for it to be a header plus a single source file. In this case, builds look normal, debugging is the same as for your own code, and everything can be simpler.
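                      The usual single-file pattern (stb-style; nuklear does the same with an NK_IMPLEMENTATION macro, if I remember right) gets you most of that header-plus-source split anyway. A toy sketch with hypothetical names, header inlined so it’s one runnable file:

                      ```c
                      #include <stdio.h>

                      #define MYLIB_IMPLEMENTATION /* defined in exactly ONE .c file per project */

                      /* ---- what would live in mylib.h (hypothetical library) ---- */
                      #ifndef MYLIB_H
                      #define MYLIB_H
                      int mylib_add(int a, int b);
                      #endif

                      #ifdef MYLIB_IMPLEMENTATION
                      int mylib_add(int a, int b) { return a + b; }
                      #endif
                      /* ---- end of header ---- */

                      int main(void) {
                          printf("%d\n", mylib_add(2, 3)); /* prints 5 */
                          return 0;
                      }
                      ```

                      Every other translation unit just includes the header without the macro, so the implementation is compiled once and debugging steps into ordinary code.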

                      1. 2

                        I don’t care that the distribution mechanism is a single header file. I care that the original code is a single header file. I’d also like to point out that having a single header file does not relieve you of your duty to carefully set the architecture, defines, and compiler flags when including this dependency in your projects.

                        1. 1

                          Fair enough, fair enough. That said, for something like this, I’m more concerned about ease of integration than how clean the black box looks inside; hence my preference for a single file.

                  1. 6

                    Does anyone know if there’s a test tool to validate the behaviour?

                    EDIT: http://www.i3s.unice.fr/~jplozi/wastedcores/

                    Tools: [Available soon]

                    1. 2

                      Missing one thing: what the hell is Pony? There’s no link to any sort of home page anywhere.

                      1. 5

                        This thing has the visual satisfaction of watching a disk defragger run.

                        1. 3

                          And you can tweak it for optimal convergence!

                          1. 6

                            I’d like to point out that property testing is attainable in lower-level languages such as C/C++. See my data structures library, which has Python bindings and is tested with the excellent Hypothesis library:

                            https://github.com/fsaintjacques/libtwiddle

                            and

                            https://github.com/fsaintjacques/libtwiddle/tree/develop/python/tests

                            Thank you DRMacIver.

                            1. 1

                              There’s a nice C library for this sort of testing as well: https://github.com/silentbicycle/theft

                              1. 5

                                Theft is great (sentences to take out of context…) but the major problem with it is that it doesn’t come with any sort of library of data generators or shrinkers, so it’s extremely DIY. Most of the work in doing this sort of thing is writing those generators and shrinkers, so it really helps to have a pre-built library of them rather than having to roll your own.

                                I’ll grant that rolling your own is very in the spirit of the language, but I’d still rather not do it if I don’t have to and I’m probably as close as it gets to being an expert in the subject. :-)
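                                To make the DIY part concrete, rolling your own looks roughly like this (not theft’s actual API, just the hand-rolled moral equivalent; the property is a deliberately false toy claim):

                                ```c
                                #include <stdio.h>
                                #include <stdlib.h>

                                /* Toy property under test: deliberately false for |x| >= 32. */
                                static int property_holds(int x) { return (long)x * x < 1000; }

                                /* Hand-rolled shrinker: given a failing input, repeatedly try the
                                 * "simpler" candidate x/2 and keep it while it still fails, ending
                                 * on a locally minimal counterexample. */
                                static int shrink_failure(int x) {
                                    while (x != 0) {
                                        int candidate = x / 2;
                                        if (property_holds(candidate))
                                            break; /* candidate passes: x is as small as halving gets */
                                        x = candidate;
                                    }
                                    return x;
                                }

                                int main(void) {
                                    srand(42); /* hand-rolled "generator": just random ints */
                                    for (int i = 0; i < 1000; i++) {
                                        int x = rand() % 100000;
                                        if (!property_holds(x)) {
                                            printf("failed at %d, shrunk to %d\n", x, shrink_failure(x));
                                            return 0;
                                        }
                                    }
                                    puts("no counterexample found");
                                    return 0;
                                }
                                ```

                                Libraries like Hypothesis ship generators and shrinkers (with much smarter strategies than halving) for every common type; that is exactly the part theft leaves to you.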

                                1. 2

                                  I did look into theft, but this is where I find scripting languages much more pleasant to work with. Hypothesis comes with automatic ‘reduction’ functions, while in theft you have to implement all of this yourself.

                                  Writing the equivalent testing functionality with theft would probably have taken as much code as the code under test.

                              1. 3

                                  Working on completing my SIMD implementation for libtwiddle, I’m tackling an interesting problem regarding sums and powers:

                                https://github.com/fsaintjacques/libtwiddle/blob/feature/simd-support/src/twiddle/hyperloglog/hyperloglog.c#L105-L111
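                                  For context, the sum in question is presumably the HyperLogLog estimator’s denominator, a sum of 2^-register[i] over all registers; I haven’t verified that against the linked lines, and the names below are hypothetical. A scalar sketch of what the SIMD code would vectorize:

                                  ```c
                                  #include <stdint.h>
                                  #include <stddef.h>
                                  #include <stdio.h>

                                  /* 2^-r without libm: exact in double at HyperLogLog register sizes. */
                                  static double inv_pow2(uint8_t r) {
                                      double p = 1.0;
                                      while (r--)
                                          p *= 0.5;
                                      return p;
                                  }

                                  /* Scalar form of the sum the SIMD code would vectorize:
                                   * sum over all registers of 2^-registers[i]. */
                                  static double hll_inverse_sum(const uint8_t *registers, size_t n) {
                                      double sum = 0.0;
                                      for (size_t i = 0; i < n; i++)
                                          sum += inv_pow2(registers[i]);
                                      return sum;
                                  }

                                  int main(void) {
                                      uint8_t regs[4] = {0, 1, 2, 3};
                                      printf("%f\n", hll_inverse_sum(regs, 4)); /* 1 + 1/2 + 1/4 + 1/8 = 1.875 */
                                      return 0;
                                  }
                                  ```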