For Java I found it easiest to do this “from the other side”: there are Maven plugins that will generate an adequate .deb that bundles all the jars into a folder in /usr/share and puts a launch script in /usr/bin and/or a reasonable service/daemon configuration. Then the .deb is just built as part of the Maven release (and uploaded to an internal apt repository).
A shaded executable jar is useful but not sufficient if you want to e.g. start the service on boot. So there’s some value in having an actual .deb. I was somewhat surprised by this approach when I first joined that company but it worked well in practice.
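For reference, the jdeb Maven plugin (org.vafer:jdeb) is one such plugin. A rough sketch of the pom.xml configuration, where the version, paths, and the "myapp" name are illustrative, not from the original comment (check the jdeb docs for the real schema):

```xml
<plugin>
  <groupId>org.vafer</groupId>
  <artifactId>jdeb</artifactId>
  <version>1.10</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>jdeb</goal></goals>
      <configuration>
        <dataSet>
          <!-- the application jar goes under /usr/share -->
          <data>
            <src>${project.build.directory}/${project.build.finalName}.jar</src>
            <type>file</type>
            <mapper>
              <type>perm</type>
              <prefix>/usr/share/myapp</prefix>
            </mapper>
          </data>
          <!-- a launch script goes in /usr/bin, marked executable -->
          <data>
            <src>${basedir}/src/deb/myapp</src>
            <type>file</type>
            <mapper>
              <type>perm</type>
              <prefix>/usr/bin</prefix>
              <filemode>755</filemode>
            </mapper>
          </data>
        </dataSet>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With something like this, `mvn package` emits the .deb alongside the jar, so the release pipeline needs no extra packaging step.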
20k LoC single header file? NOPE.
-rw-r--r-- 1 fsaintjacques staff 666K Apr 19 09:30 nuklear.h
devilish indeed.
This is a personal and debatable opinion unrelated to C, but I find giant files unreadable when diving into a new codebase. I consider it good engineering practice to separate logical units into distinct files. The author has another project which does this: https://github.com/vurtun/mmx .
I think there’s a balance to strike between micro and humongous source files. As an analogy with writing, think about paragraphs of one line versus one giant paragraph: you want neither.
With respect to my colleague here, it’s not actually a dealbreaker.
In fact, with the lack of a standardized package manager and build system, for C and C++ projects it is often preferable to simply have a source file or two for a neat feature. Other options are generally:
And then there is the joy of trying to actually step through that garbage when debugging.
Or, you can just add a big honking header file like this and move along, though I think it would’ve made more sense for it to be a header plus a single source file. In this case, builds look normal, debugging is the same as for your own code, and everything can be simpler.
I don’t care that the distribution mechanism is a single header file. I care that the original code is a single header file. I’d also like to point out that having a single header file does not relieve you of your duty to carefully set the architecture macros, defines, and compiler flags when including this dependency in your projects.
Fair enough, fair enough. That said, for something like this, I’m more concerned about ease of integration than how clean the black box looks inside; hence my preference for a single file.
Does anyone know if there’s a test tool to validate the behaviour?
EDIT: http://www.i3s.unice.fr/~jplozi/wastedcores/
Tools: [Available soon]
I’d like to point out that property testing is attainable in lower-level languages, e.g. C/C++. See my data structures library, which has Python bindings and is tested with the excellent Hypothesis library:
https://github.com/fsaintjacques/libtwiddle
and
https://github.com/fsaintjacques/libtwiddle/tree/develop/python/tests
Thank you DRMacIver.
There’s a nice C library for this sort of testing as well: https://github.com/silentbicycle/theft
Theft is great (sentences to take out of context…) but the major problem with it is that it doesn’t come with any sort of library of data generators or shrinkers, so it’s extremely DIY. Most of the work in doing this sort of thing is writing those generators and shrinkers, so it really helps to have a pre-built library of them rather than having to roll your own.
I’ll grant that rolling your own is very in the spirit of the language, but I’d still rather not do it if I don’t have to and I’m probably as close as it gets to being an expert in the subject. :-)
I did look into theft. But this is where I find a scripting language much more pleasant to work with. Hypothesis comes with automatic ‘reduction’ functions, while in theft you have to implement all of this yourself.
Writing the equivalent testing functionality with theft would probably have taken as much code as the code under test.
Working on completing my SIMD implementation of libtwiddle, I’m tackling an interesting problem regarding sum and powers:
The high count for the number of times compdef is run indicates that this run is without a cache for compinit. The lack of calls to compdump also indicates that it isn’t creating a dump file. This cache speeds up compinit startup massively. I’m not quite sure how the blog post author has contrived to not have a dump file.
“contrived” is a rather strong word to use here and suggests intent (perhaps to deceive). I’m not sure if it was your intent or not.
This actually is with a dump file. A .zcompdump-MACHINENAME file is created in ~ (see here). The issue is that this is recreated each time the shell starts up. There are multiple places in OMZ that call compinit. In the additional reading at the bottom there is a link that modifies zsh to only recreate it once a day, but I still feel like that’s not ideal.

Can you redefine compinit to a no-op during OMZ loading, and then do it yourself at the end?

IMO zshrc should explicitly call compinit so it happens exactly once in a central location.

Maybe he didn’t know?
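A common pattern for that central call (run compinit once, and skip its slow security audit when the dump file is fresh) looks something like the sketch below. The 24-hour glob-qualifier trick circulates in various dotfiles; it is illustrative, not OMZ’s actual code:

```shell
# ~/.zshrc -- call compinit exactly once, in one central place.
autoload -Uz compinit
# (#qN.mh+24) matches the dump file only if it exists and is older
# than 24 hours; the N qualifier makes the glob expand to nothing
# (instead of erroring) when there is no match.
if [[ -n ${ZDOTDIR:-$HOME}/.zcompdump(#qN.mh+24) ]]; then
  compinit        # dump is stale: rebuild it and re-run the audit
else
  compinit -C     # dump is fresh (or being created): skip the audit
fi
```

With this in place, any later compinit calls inside OMZ plugins can be stubbed out, since completion is already initialized.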