GNU Autotools: just kill this horrific pile of garbage with fire. Especially terrible when libtool is used. Related: classic PHK rant.
CMake: slightly weird language (at least a real language which is miles ahead of autocraptools), bad documentation.
Meson: somewhat inflexible (you can’t even set global options like b_lundef conditionally in the script!) but mostly great.
GYP: JSON files with conditions as strings?! Are you serious?
Gradle: rather slow and heavy, and the structure/API seems pretty complex.
Bazel/Buck/Pants (nearly the same thing): huge mega build systems for multiple languages that take over everything, often with little respect for these languages’ build/package ecosystems. Does anyone outside Googlefacetwitter care about this?
Grunt, Rake, many others: good task runners, but they’re not build systems. Do not use them to build.
Related: classic PHK rant.
This one is even better since its observations apply to even more FOSS than libtool alone. It also has some laughable details on libtool, along with (IIRC) the person who wrote libtool apologizing in the comments.
FWIW: bazelbuckpants seem to be written for the proprietary software world: a place where people are hesitant to depend on open-source dependencies in general, and people have a real fear (maybe fear is strong, but still) of their dependencies and environment breaking their build. I use them when I’m consulting, because I can be relatively certain that the build will be exactly the same in a year or so and I don’t like having to fix compilation errors in software I wrote a year ago.
I’m with you on Grunt, but Rake is actually a build tool with Make-style rules and recipes for building and rebuilding files when their dependencies change. There’s a case that Rake is just Make ported to Ruby syntax. It’s just more commonly used as a basic task runner.
I think Make is also somewhat close to a task runner. It has dependencies, but not much else. You write compiler invocations manually…
It sort of has default rules for building a number of languages, though these aren’t terribly helpful anymore.
I also use Make as task runner. Mostly to execute the actual build system, because everybody knows how to run make and most relevant systems probably have Make installed in one form or another.
We use Pants here at Square, in our Java monorepo. It works quite nicely, actually. For our Go monorepo, we just use standard Go tooling, but I’ve volunteered to convert to Pants if anyone can get everyone to move to a single monorepo. They won’t, because every Rails project has its own repo, and the Rails folks like it that way.
Only vaguely on topic, but a friend once made a nice comparison that illuminates the distinction between dynamic/lexical scoping using a shell/process analogy: command line arguments behave like they’re lexically scoped, and environment variables behave like they’re dynamically scoped.
It’s the filesystem location specified in the GNU Coding Standards.
Executable programs are installed in one of the following directories.
bindir: The directory for installing executable programs that users can run. This should normally be /usr/local/bin, but write it as $(exec_prefix)/bin.
Packages should never be installed to /usr/local
https://wiki.archlinux.org/index.php/arch_packaging_standards
Arch users expect packages to install in /usr, so it makes more sense to follow the Arch packaging standards here.
GNU expects downstream packagers (“installers”) to change the install location, which is why the prefix variable exists. /usr/local/ is an appropriate default for “from-source” installs, to avoid conflicts with packages.
Some people want easy access to the benefits of containerization such as: resource limits, network isolation, privsep, capabilities, etc. Docker is one system that makes that all relatively easy to configure, and utilize.
Docker is one system that makes me wish Solaris Zones took off, which had all of that, but without the VM.
Docker hasn’t used LXC on Linux in a while. It uses its own libcontainer which sets up the Linux namespaces and cgroups.
This is the correct answer. It’s a silly question. Docker has nothing to do with fat binaries. It’s all about creating containers for security purposes. That’s it. It’s about security. You can’t have security with a bunch of fat binaries unless you use a custom jail, and jails are complicated to configure. You have to do it manually for each one. Containers just work.
security
That is definitely not why I use it. I use it for managing many projects (go, python, php, rails, emberjs, etc) with many different dependencies. Docker makes managing all this in development very easy and organized.
I don’t use it thinking I’m getting any added security.
I don’t use it thinking I’m getting any added security.
The question was “Why would anyone choose Docker over fat binaries?”
You could use fat binaries of the AppImage variety to get the same, and probably better organization.
Maybe if AppImages could be automatically restricted with firejail-type stuff they would be equivalent. I just haven’t seen many developers making their apps that way. Containers let you deal with apps that don’t create AppImages.
Interesting. So in effect you wish to “scope” portions of a “fat binary” for “protected” or “limited” use? As opposed to the wide-open scope implicit in static linking?
So we have symbol resolution by simply satisfying an external reference, resolution by explicit dynamic binding (a dynload call such as dlopen), or chains of these connected together? These are all the cases, right?
We’d get the static cases handled via the linker, and the dynamic cases through either the dynamic loading functions or possibly wrapping the mmap calls they use.
That sounds genuine.
So I get that it’s one place, already working, to put all the parts. I buy that.
So in this case, it’s not so much Docker qua Docker as it is a means to an end. This answers my question well, thank you. Any arguments to the contrary with this? Please?
This answers my question well, thank you. Any arguments to the contrary with this? Please?
While I think @adamrt is genuine, I’m interested in seeing how it pans out over the long run. My (limited) experience with Docker has been:
I suspect the last point is going to lead to many “we have this thing that runs but don’t know how to make it again so just don’t touch it and let’s invest in not touching” situations. People that are thoughtful and make conscious decisions will love containers. People inheriting someone’s lack of thoughtfulness are going to be miserable. But time will tell.
Well these aren’t arguments to the contrary but accurate issues with Docker that I can confirm as well. Thank you for detailing them.
I think there’s something more to it than that. On Solaris and SmartOS, you can have security/isolation with either approach. Individual binaries have privileges, or you can use Zones (a container technology). Isolating a fat binary using ppriv is if anything less complicated to configure than Zones. Yet people still use Zones…
I thought it was about better managing infrastructure. Docker itself runs on binary blobs of privileged or kernel code IIRC (I don’t use it). When I pointed out its TCB, most people talking about it on HN told me they really used it for management and deployment benefits. There was also a slideshow a year or two ago showing security issues in lots of deployments.
What’s the current state in security versus VM’s on something like Xen or a separation kernel like LynxSecure or INTEGRITY-178B?
Correct. It is unclear how the compartmentalization aspect of containers specifically contributes to security.
I’ve implemented TCSEC Orange Book Class B2/B3 systems with labelling, and worked with Class A hardware systems that had provable security at the memory-cycle level. Even those had intrusion evaluations that didn’t close, but at least the models showed the bright line of where the actual value of security was delivered, as opposed to the loose, vague concept of security being offered as a defense here.
FWIW, the actual objective of the framers of that security model was a program-verifiable, object-oriented programming model to limit information leakage: programming environments that let programs “leak” trusted information only to trusted channels.
You can embed crypto objects inside an executable container, and that would deliver a better security model without additional containers, because you then deal with the issues of key distribution without the additional leakage of the intervening intra-container references that would otherwise be necessary.
So again, I’m looking for where’s the beef instead of the existing marketing buzz that makes people feel good/secure because they use the stuff that’s cool at the moment. I’m all ears for a good argument for all these things, I really am… but I’m not hearing it yet.
Thanks to Lobsters, I already met people that worked in capability companies such as that behind KeyKOS and E. Then, heard from one from SecureWare who had eye opening information. Now, someone that worked on the MLS systems I’ve been studying a long time. I wonder if it was SCOMP/STOP, GEMSOS, or LOCK since your memory cycle statement is ambiguous. I’m thinking STOP at least once since you said B3. Do send me an email to address in my profile as I rarely meet folks knowledgeable about high-assurance security period much less that worked on systems I’ve studied for a long time at a distance. I stay overloaded but I’ll try to squeeze some time in my schedule for those discussions esp on old versus current.
thought it was about better managing infrastructure.
I mean, yes, it does that as well, and you’re right, a lot of people use it just for that purpose.
However, you can also manage infrastructure quite well without containers by using something like Ansible to manage and deploy your services without overhead.
So what’s the benefit of Docker over that approach? Well… I think it’s security through isolation, and not much else.
Docker itself runs on binary blobs of privileged or kernel code IIRC (I don’t use it).
Yes, but that’s where capabilities kick in. In Docker you can run a process as root and still restrict its abilities.
Edit: if you’re referring to the dockerd daemon which runs as root, well, yes, that is a concern, and some people, like Jessie Frazelle, hack together stuff to get “rootless container” setups.
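As a concrete sketch of that kind of restriction (service and image names here are just examples), using the standard cap_drop/cap_add keys from the Compose file format:

```yaml
# Run the process as root inside the container, but drop every Linux
# capability except the one needed to bind ports below 1024.
services:
  web:
    image: nginx
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
```

The same effect is available on the CLI via docker run’s --cap-drop/--cap-add flags.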
When I pointed out its TCB, most people talking about it on HN told me they really used it for management and deployment benefits. There was also a slideshow a year or two ago showing security issues in lots of deployments.
Like any security tool, there’s ways of misusing it / doing it wrong, I’m sure.
According to Jessie Frazelle, Linux containers are not designed to be secure: https://blog.jessfraz.com/post/containers-zones-jails-vms/
Secure container solutions existed long before Linux containers, such as Solaris Zones and FreeBSD Jails yet there wasn’t a container revolution.
If you believe @bcantrill, he claims that the container revolution is driven by developers being faster, not necessarily more secure.
According to Jessie Frazelle, Linux containers are not designed to be secure:
Out of context it sounds to me like you’re saying “containers are not secure”, which is not what Jessie was saying.
In context, to someone who read the entire post, it was more like, “Linux containers are not all-in-one solutions like FreeBSD jails, and because they consist of components that must be properly put together, it is possible that they can be put together incorrectly in an insecure manner.”
Oh sure, I agree with that.
Secure container solutions existed long before Linux containers, such as Solaris Zones and FreeBSD Jails yet there wasn’t a container revolution.
That has exactly nothing (?) to do with the conversation? Ask FreeBSD why people aren’t using it as much as linux, but leave that convo for a different thread.
That has exactly nothing (?) to do with the conversation?
I’m not sure how the secure part has nothing to do with the conversation, since the comment this is responding to is you saying that security is the reason people use containers/Docker on Linux. I understood that as you implying security was the game changer. My experience is that it has nothing to do with security; it’s about developer experience. I pointed to FreeBSD and Solaris as examples of technologies that had secure containers long ago, but they did not have a great developer story. So I think your belief that security is the driver for adoption is incorrect.
Yes. Agree not to discuss more on this thread, … but … jails are both too powerful and not powerful enough at the same time.
Generally when you add complexity to any system, you decrease its scope of security, because you’ve increased the footprint that can be attacked.
I’d suggest using one of the Bash clients.
Note that if you follow their installation instructions you’ll still need to set permissions via sudo chmod 755 /usr/local/bin/tldr or similar.
sudo chmod +x $location is in their instructions. I suppose you’d need the mode fully spelled out if you have a conservative umask (like the popular choice 077) and aren’t running tldr from a root shell.
They completely blew it. All those polls were for nothing. It’s just a T470 with a 7 row keyboard: http://www.omgubuntu.co.uk/2017/09/retro-thinkpad-image-2017
It’s the same crappy 16:9 screen like all the other laptops today. I hear all the time that nobody makes 4:3 or 16:10 screens anymore, yet Apple sells 12.9’’ and 10.5’’ 4:3 tablets and 12’’, 13.3’’, and 15.4’’ 16:10 laptops without any issue. All with high quality IPS screens, most of them wide gamut even.
Poor chassis choice. Retro ThinkPad enthusiasts are either into the smaller models (600X, 701C, X300, the entire X series for that matter) or the really huge ones (A-series, 700 series) - basing it on the T470 makes no one happy.
Um, plenty of my “retro ThinkPad enthusiast” friends love their T40 and T60 series models, which are comparable in physical size to today’s T400 series models (~13in by ~9in).
Speaking for myself—who got into the game too late to be much a “retro ThinkPad enthusiast”—I like my X220. I don’t especially care for my T450s daily driver, but that has little to do with the form factor (which I think is fine), and everything to do with the keyboard, lack of ThinkLight + lid latch + ports in the back, and the 16:9 (rather than 16:10) display.
If the “ThinkPad Retro” had a 16:10 display, it would be perfect for me, modulo what we haven’t heard about what ports it’ll have. By the way, since it hasn’t been linked in this subthread, here is the image of the “ThinkPad Retro” that Lenovo accidentally leaked.
I agree with you - though I had older ThinkPads, including the X60s. I must say that I love my T450s a lot - I like the form factor of the laptop, but I would prefer a better keyboard and more vertical screen. I have to say that I’ve started to like having that much horizontal space, as I am able to edit two programs side by side.
Thanks for the photo - I like the keyboard, especially the big ESC key :-)
FWIW I edit three files side by side in my 15” MacBook Pro, and I still have better vertical space than the T450 in about the same physical package.
I do not know anything about MacBooks, but the T450s is only 14”, so it’s normal that a 15” has more vertical and horizontal space. A 15” in the same overall size as the T450s sounds very interesting. But in any case a Mac will never be for me - I need the TrackPoint, a good keyboard, and Linux.
The worst part is that when sales are slow, they’re going to blame it on “oh, the enthusiast market isn’t willing to put their money where their mouth is” instead of on the horrible screen, and we won’t see things like this happen again.
I don’t think it’s that big of a deal. I agree with a comment on the post.
If people use random TLDs for testing then that’s just bad practice and should be considered broken anyway.
At least the developer tools (like pow, puma-dev) which do squat on *.dev will now be compelled to support “turnkey https” out of the box, or risk losing many of their users.
Well, since .dev is a real domain, what I actually suspect will happen is they’ll just switch to something else. Which, to be honest, I’d prefer: I’m all for HTTPS everywhere, but on localhost, when doing dev, it’s not worth it 99.9% of the time (and it robs me of tools like nc and tcpdump to casually help with certain issues).
Very useful if your asset names contain version info (e.g., commit hash). It’s also a surprisingly big win for social sites where people click refresh often. See https://hacks.mozilla.org/2017/01/using-immutable-caching-to-speed-up-the-web/
For toying with it, I set up this demo page at https://immutable.on.web.security.plumbing/
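For reference, the header itself is one line; a hedged nginx sketch (the location pattern is a made-up example matching hash-stamped asset names):

```nginx
# Assets whose filenames embed a content hash can never change in place,
# so tell clients to cache them for a year and skip revalidation entirely.
location ~ "\.[0-9a-f]{8}\.(js|css)$" {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```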
Shameless plug, I wrote a toy pastebin server called yasuc, whose pastes are identified (and located) by the SHA-256 of their contents. Immutable responses would be most excellent for this use case.
Save yourself the trouble and use Autotools: https://autotools.io/index.html
spits out water in disbelief, record scratch to silence – the more common position that I hear is “autotools” is not very “auto”, and in fact more trouble than it’s worth.
the more common position that I hear is “autotools” is not very “auto”, and in fact more trouble than it’s worth
And it usually comes from those that never needed complicated things from their build system. The ones jumping at the chance of using some exciting new build system with a quarter of Autotools’ functionality because all they do is the simple stuff.
Even if that’s you right now, pay some respect to the old software gods. You’re going to need them when the going gets tough.
Personally, I’m in the camp of using make, and would probably reach for autotools when the going got tough. But build tools seem to be becoming as controversial, in some circles, as using C instead of the “safer” rust.
Personally, I’m in the camp of using make, and would probably reach for autotools when the going got tough.
Why not do it the right way from the start and get most of the portability for free?
Autotools definitely has its uses, but part of the reason for some of the new build systems is that portable sometimes means Windows (MSVC), OS X, and Linux, rather than POSIX.
Autotools definitely has its uses, but part of the reason for some of the new build systems is that portable sometimes means Windows (MSVC), OS X, and Linux, rather than POSIX.
Autotools support all these operating systems: https://github.com/opentoonz/GTS/pull/4
I did the Linux port and got an OS X port for free (and probably a Windows port through Cygwin or MinGW).
While I appreciate what you are trying to say, I stuck the MSVC after Windows because that is the reality that many Windows developers have to deal with. Using a POSIX emulation environment like Cygwin or MSYS2 is not always an option. The OP article also mentions that if you are using Makefiles you will (most likely) need a separate one to support MSVC.
I’ll ask the question I ask every time autotools comes up: can you name some of the functionality autotools provides – which is actually still useful today?
Respect is all very well, but people should know why they use the tools they use, and not just cargo cult things long after they’re obsolete.
I’ll ask the question I ask every time autotools comes up: can you name some of the functionality autotools provides – which is actually still useful today?
Here’s one for you - it makes the hard things possible. Hard things like combining Go and C code in a library: https://github.com/stefantalpalaru/golib/blob/master/Makefile.am
What is autotools doing here, as regards the problem of linking Go and C object files together, that vanilla GNU/BSD make couldn’t?
If you want to limit the conversation to things which produce Makefiles, such a thing is fairly easily done in cmake, too (or, in the Rust world, with cargo). And it would likely “just work” as well with cmake’s ninja generator.
What is autotools doing here, as regards the problem of linking Go and C object files together, that vanilla GNU/BSD make couldn’t?
conditional benchmark compilation if the default Go compiler is available (determined by the configure script).
shared and static library creation for multiple platforms, thanks to libtool.
enabling extra CFLAGS (and related features) if gcc is new enough to support them
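A minimal configure.ac sketch of the first point (project and conditional names here are hypothetical; AC_CHECK_PROG and AM_CONDITIONAL are the stock autoconf/automake macros):

```
AC_INIT([golib], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
LT_INIT
dnl Enable the benchmark targets only if a Go compiler is on the PATH.
AC_CHECK_PROG([HAVE_GO], [go], [yes], [no])
AM_CONDITIONAL([BUILD_BENCHMARKS], [test "x$HAVE_GO" = xyes])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```

The Makefile.am can then wrap the benchmark rules in an `if BUILD_BENCHMARKS` … `endif` block.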
such a thing is fairly easily done in cmake, too
Good luck with using gccgo as a compiler and passing it custom flags. You’ll probably end up writing one or more CMake modules.
I really can’t stand systemd. Absolutely hate it.
Most people wouldn’t accept Emacs with a ton of plugins as the default editor on a Linux system, yet we accept a huge, monolithic, scope and feature creeping sprawling mess as our init in most of the major distros.
Even worse, this is being led by someone who ignores and tries to cover up attempts to improve the spaghetti.
I hate the sheer scope of the thing (why does pid 1 need a DNS server?)
PID1 does not include a dns server, it’s a separate binary.
You are of course, correct. Thanks for pointing that out. I should’ve said, (why does systemd need a DNS server?)
It doesn’t. To the best of my knowledge, systemd works just fine without systemd-resolved. I also wouldn’t characterize systemd-resolved as a “DNS server”, though, so maybe you’re talking about some other component I don’t know of.
I’d characterise systemd-resolved as a DNS server. You can split hairs over whether or not it needs to serve its own records to constitute a full DNS server, but it listens on port 53 and serves responses to DNS queries.
the problem with this bug is not that it exists, but what it says about the creators of systemd and their behaviour when bugs are found:
a) why does this get parsed as a number in the first place? is really only the first byte checked, and if it looks numeric then it’s a number? are there no sane .ini-style parsers that can do this, or did the systemd folks just want to write their own?
b) the handling of this bug is an example of lennarts social skills and technical expertise: “works for me, you are dumb”.
and that’s the perspective imho.
It also speaks volumes about the complexity in systemd as shit like this creeps in more often than in certain competition.
It also speaks volumes about the complexity in systemd as shit like this creeps in more often than in certain competition.
It could also just be speaking volumes about the sizes of the install bases for systemd and competing “modern” init systems.
Forcing JS everywhere is arrogance. Accept that native platforms offer a superior UX to web UIs at a fraction of the resource cost. Understand that forcing a document viewing platform to be an app runtime suffers from impedance mismatches that users pick up on. (Shipping a V8 runtime is still hacky).
For all that tech claims to care about execution and UX, they’re awfully wedded to second-best tooling. And for all the talk of “constant learning,” nobody seems to want to move on, instead cramming square pegs further into round holes, wasting engineering effort, and then writing self-congratulatory Medium posts on how they hit 60fps on an i7.
This is not engineering. It is fashion.
it’s not arrogance, it’s a sad but pragmatic concession to the fact that there is no good, productive way to write a cross-platform desktop app with easy packaging and shipping for at least linux, mac and windows. people can simply develop and maintain code a lot more easily in electron than they can in C++/Qt, .NET still has issues on linux, gtk is a pain to package and ship, and somehow java never took off (from personal experience, I gave up on it because swing was painful, but I hear there are better options now).
i have personally settled on ocaml + gtk, which i found nicely productive under linux, but it took me days to get things compiling under windows, mostly because i had to set up a cygwin environment and fight incompatibilities between things compiled against gtk 2.24.30 and things compiled against gtk 2.24.31.
(incidentally, racket is a very productive language for writing desktop apps in; it just needs a lot of work put into optimising the gui platform. i’m keeping a hopeful eye on it.)
This is a good point. There are two issues here:
A language that has equal footing on both macOS and Windows. There are a few that fit the bill here, but it certainly limits your options out of the gate.
A cross-platform UI framework that isn’t terrible. I think they all are, including Qt. (Qt just gets a pass these days because it is less bad than everything else.) Also, chosen language needs decent bindings to the UI framework.
Ideal: common library shared by native apps, but nobody does this.
good, productive way to write a cross-platform desktop app with easy packaging and shipping
It’s going to be controversial but: Java 8 is one, actually. And JavaFX isn’t really bad. And you can bundle the JRE so the user does not have to install it.
And you can use Kotlin or Ceylon or even Scala if you really don’t like .java.
Can you hot reload UI code without blowing up application state in Java/JavaFX the same way you can with the web? How about inspect and manipulate UI elements in a running program?
Iterative UI development on the web, if you’re careful to disentangle state and operations on that state (React makes this easy!), is very nice.
as an active java developer between 1997 and 2013 I would have said “no, not really”, but since getting into Android development, I have changed my tune completely. well designed code can immediately reload any and all state from storage and a good development environment can ‘hot’ deploy. I’ve extended the notion to some of my Java desktop and server apps and it works just as well. the key is in app design; if your code can tolerate being killed without notice, hot reloads are essentially a freebie!
or clojure :) i started writing a desktop app in clojure several years ago, and only gave it up because swing was too painful, but clojure itself was pretty pleasant to develop gui apps in.
“Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” - Dr. Ian Malcom, Jurassic Park (1993)
Shipping a V8 runtime is still hacky
Is shipping a lua runtime similarly hacky? edit: add a y to hack
I don’t know, but at least Lua is made to be embeddable, whereas JS is embedded because they want to write the app in that.
whereas JS is embedded because they want to write the app in that.
What is V8, if not an embeddable scripting language for Chrome’s version of WebKit?
Say what you will about JS (I hate it), but it’s almost undeniable that we wouldn’t be talking about Electron enough to have any debate about it at all if it used something other than JS.
Is it really embeddable? Wasn’t one of the issues of Node that V8 changes its API so quickly it is difficult to keep track of, so unless you have a lot of resources you’re going to get stuck on an old version, because API compatibility is not a priority for V8?
Define “embeddable”?
It might not be their goal to provide V8 as an embeddable implementation of JavaScript for other projects, but it’s certainly embedded in Chrome/Chromium, right?
Wasn’t one of the issues of Node that V8 changes API so quick
I have no idea, as I don’t follow this. But, this sounds like a tradeoff that Node has to make if they want to continue using V8, so long as V8 has no interest in providing this sort of compatibility.
Sure enough, only 4 hours later a discussion on trouble with keeping up with V8 happens on the Node project.
Lots of things are stuck inside another thing - Lua is seen as being a particularly embeddable language and runtime because a bunch of choices were made in their design to facilitate embedding them in things. V8 is essentially the opposite. Hence, while it clearly is embedded in various things, it isn’t really a language and runtime combination which is good for that case; so, ‘not really embeddable’.
With @Leonidas comment, I think I (may?) now understand your intention in saying “Shipping a V8 runtime is still hacky.”
Did you mean to imply that because V8 has to be yanked out of Chrome, and the API isn’t stable that it’s hacky?
I’d call it that. Lua, like most embeddable stuff, is designed to be easily included in whatever app you want via its existing interfaces. This will be easy. V8 is embedded in Chrome but maybe not “embeddable” in other stuff easily. It didn’t seem a high priority in its design.
Yup! I was trying to understand if @mattgreenrocks thought shipping any embedded language was hacky, or just V8 / node in particular. But even still, I don’t see how that’s avoidable in Electron’s case, given their goals.
I wouldn’t consider Lua hacky to ship, generally, but there are certainly situations in which shipping a Lua VM would be hacky. It all depends on what the intentions and goals are.
It was targeted more at V8, yeah.
You can usually find ways to package a VM in with an app and remove hassles around needing the VM on the system or certain versions. Desktop apps have big issues around making it easy to package and run everywhere, still.
I may be way off here (I’ve never done an Electron app), but I’m surprised that this thread seems to be going on the assumption (on all sides?) that embedding V8 is really the objectionable part of Electron. I had always assumed it was the embedded web browser (Chromium) that was the culprit in making these apps large, memory-hungry, and “webbish” in their UI conventions, not the embedded JS runtime. I mean, Node CLI apps aren’t necessarily my favorite way to write a CLI app, but they’re not sluggish the way the Slack desktop app is sluggish.
It is, but I think @apg was poking at what part of my “embedding V8 is still hacky” part, not, “what part of Electron is objectionable?”
Re-using the entire browser layout engine is ridiculous, yes.
I had always assumed it was the embedded web browser (Chromium)
I mentally filter out most things that look like another JS framework or whatever, especially if the comments start with how resource-hungry it is. This is the first piece on Electron I actually read, where I found out it embeds a whole web browser. Wow, yeah, easily the most objectionable thing. This tangent covered another important property: how embeddable its main components were in the first place. As in, should they be used at all vs. individual libraries? Where that went shows it’s even worse of an idea.
Eh… I don’t think so:
$ cd /tmp/v8; sloccount .
Totals grouped by language (dominant language first):
cpp: 1305265 (97.78%)
python: 27869 (2.09%)
sh: 1147 (0.09%)
ansic: 357 (0.03%)
lisp: 222 (0.02%)
$ cd /tmp/lua; sloccount .
Totals grouped by language (dominant language first):
ansic: 16595 (100.00%)
Lua’s source is a little over 1% of v8’s.
Even LuaJIT which is faster than v8 in many cases is smaller:
$ cd /tmp/luajit; sloccount .
Totals grouped by language (dominant language first):
ansic: 59836 (100.00%)
Lua is the only language I’ve seen that really meets the requirements for being an embedded interpreter (apart from some tiny lisp implementations). Granted, node runs on a lot of IoT devices but I’m pretty sure it’s using more than 256k of flash and more than 64k of ram (unlike elua).
requirements for being an embedded interpreter
What are the official requirements for this?
I don’t think a lot of people care about slocount if it meets their requirements.
Who is forcing anything anywhere? People use the tools they want to use to build their apps. Make the native tools attractive to developers and they’ll build using those tools.
Telling developers who know Javascript and all of a sudden can create desktop apps that they’re being arrogant is in itself arrogant.
I’ve done a good amount of haskell and initially really disliked go, but now I like that I spend less time working on beautiful abstractions and just type the ugly Go and move on.
I really wish there was something like Go with pattern matching/sum types/generics.
OCaml is close, but it seems too dead in terms of ecosystem/community, and I think it still doesn’t handle parallelism. I think Reason does some work on my real but petty complaint that it’s kind of ugly to look at.
I started writing a wishlist that basically summed up to Rust with a GC and the lifetimes removed. The friction of threading lifetimes and such is pretty high when the performance cost of a GC isn’t a problem. Hoping that feeling goes away a bit as I get better with Rust.
edit: the list is basically: sum types, generics, parallelism, static binary, GC, green threads, pattern matching, strict, untyped-io, imperative with functional sugar like rust has
I know very little about ML implementations, and I don’t know if there are MLs that have parallelism and green threads; otherwise, yeah, I would have suggested ML. Perhaps OCaml fits? Someone familiar can chime in…
OCaml has something similar to green threads, implemented by the Lwt and Async libraries, but not yet parallelism which is underway but it will take time. But these concurrency libraries have tools to shell out computation to subprocesses, so while not terribly pretty, it can be worked around.
What I am quite interested in is the new effect system which puts an interesting spin on side-effects and how they can be handled in a type system.
That was what Oden http://oden-lang.github.io/ was supposed to be. A “statically typed, functional programming language, built for the Go ecosystem.” – basically a functional language that compiles to Go. The guy who was doing that project gave up on it because it was too much work.
I started toying around with creating an Elm to Go compiler in Go, but quickly found that to be too difficult (mostly the static type inference). I think the easiest thing to do would be to take the existing Elm compiler and retarget it to output Go code instead of Javascript. Problem is, I don’t know Haskell (which is what the Elm compiler is built in). I’m not sure what’s more difficult: learning Haskell and the Elm compiler, or building an Elm to Go compiler from scratch in Go.
I think it’s definitely possible to have Java:(Scala/Clojure) :: Go:X where X has yet to be defined.
Don’t use macros.
Why not? Macros can be very useful. For example, say I have a dispatch table to call functions with a common signature and set of local variables. If there are 30 different functions, a macro defining the function and declaring the common variables means that if something changes I only have to change it in one place. This is more than just an ease-of-coding thing: if I change from signed to unsigned or change the width of an integer and forget to change it in one place, there can be serious and hard-to-find consequences.
Don’t use fixed-size buffers.
Always use static, fixed-sized buffers allocated in the bss, if you can get away with it (that is, you know the maximum size at compile time). Allocation can fail at runtime, and adding checks everywhere for this is error-prone. If you’re allocating and freeing chunks of memory at runtime, you run the risk of use-after-free, reference miscounts, etc.
If the size of a block isn’t known until runtime, but is known at startup, allocate the necessary memory at startup and free it at shutdown.
Only as a last resort should you be doing allocation and freeing repeatedly during runtime, when the set of objects and their sizes depends on data only accessible while running.
I feel the writer is not so experienced with C.
Not only are there generic recommendations like “Prefer maintainability” (when should we not prefer maintainability?) or “Use a disciplined workflow” (yes, but what kind of workflow?); some of them go against common C best practices, like “Do not use a typedef to hide a pointer or avoid writing ‘struct’”.
Considering that opaque pointers are standard in the stdlib and highly recommended for hiding complexity and allowing code to change, I don’t know where he got these ideas.
Opaque pointers hidden behind typedefs are something I’ve never been totally comfortable with, though I guess I’ve been using them without knowing! Where in libc are they used?
typedef void* lobster_handle_t; is probably the most common way–of which I’m aware–of exposing types and structs for public consumption without giving away internal implementation details to users. This is doubly useful if you have, for example, the same interface implemented differently on different platforms: your _win32.c and _posix.c variants are chosen based on #ifdefs, but user code including your headers only ever sees the opaque pointer.
Forward declaration is the new hotness:
struct T;
void f( T * x ); // feel free to pass around T*, but you don't get to see inside
It brings no benefits to C code because all pointer types implicitly cast to each other, but in C++ they don’t and it’s definitely preferred there.
It brings no benefits to C code because all pointer types implicitly cast to each other
Whoa, no they don’t. void * implicitly converts to any other type of (non-function) pointer, and vice-versa, but that’s it.
(many compilers do allow for function pointer <-> void * conversions, even implicitly, but I think that’s an extension for POSIX compatibility.)
Correct me if I’m wrong, but doesn’t one usually use a FILE * rather than working with a raw FILE?
Sorry I was thinking just “opaque pointer” not one hidden behind a typedef. An example of a completely opaque type (from the perspective of the standard library) is va_list. Extending beyond the C standard library, you have things like pthread_t in POSIX (which could be “the standard library” depending on your definition), which is of unspecified type.
Keep in mind, va_list is not necessarily a pointer, and it’s only opaque in the sense that its contents are undefined and unportable. On x86-64 Linux, for example, it’s a 24 byte struct, and may be defined (depending on your compiler, headers, and phase of moon) as:
struct __va_list_struct {
    unsigned int gp_offset;
    unsigned int fp_offset;
    union {
        unsigned int overflow_offset;
        char *overflow_arg_area;
    };
    char *reg_save_area;
};
Right, I was trying to think of an example of an explicitly opaque type hiding behind a typedef. It’s always interesting to see how POSIX and/or C sometimes mandates some things as completely undefined by type, but not others. jmp_buf has to be an array type, for example, but is not specified beyond that, and va_list is explicitly allowed to be any type at all.
time_t: Standard C does not mandate a definition at all (it could be an integer, could be a float, could be a structure). POSIX defines it, though.
if I change from signed to unsigned or change the width of an integer and forget to change it in one place, there can be serious and hard-to-find consequences.
Agree, which is why using typedefs to make maximal use of C’s sad type system is a better move than a mere macro. Also, macros can do weird things when expanded in code, and it’s easy to end up with a codebase that is unreadable and ungreppable because of having to continually expand non-intuitive macros. They’re handy, in moderation, but overuse is not so great.
Only as a last resort should you be doing allocation and freeing repeatedly during runtime, when the set of objects and their sizes depends on data only accessible while running.
Spoken like a true Fortran programmer! ;)
More seriously, anything that is actually interactive and of any real practical use is easier coded with dynamic allocation. Also, the number of people that properly write fixed-size allocation code without leaving gigantic security holes and undefined behavior open is small. Better just to use malloc and free and know that you have problems than to hope somebody didn’t mismatch a buffer size with a differently-spec'ed memmove call.
That said, in a library, if you don’t allow users to specify their own allocation routines you are bad and you should feel bad.
~
Overall, I agree that this advice is not so great, probably because the author hasn’t had to deal with producing libraries for others to consume. That very much colors how these things are evaluated.
Fortran
curls up in a ball, rocks back and forth, crying
They’re handy, in moderation, but overuse is not so great.
That’s true of just about anything, but yes, macros are a sharp tool. It’s very easy to hurt yourself if not used very carefully, but like any sharp tool sometimes there’s a good use case. Never say never. :)
More seriously, anything that is actually interactive and of any real practical use is easier coded with dynamic allocation.
True, but not everything need be interactive. The most critical code I work on right now is highly dynamic at runtime, but does no memory allocation after startup. We calculate the sizes of various structures based on parameters provided by the system at startup, and allocate memory once. This is necessary for various reasons, but most importantly because of performance; we deal with tens-of-thousands of work units a second, of varying size. Repeatedly allocating and freeing blocks would rapidly result in fragmentation.
We originally thought about allocating fixed-size blocks, since most modern allocators would handle that well so long as there weren’t any other allocations happening. Things like tcmalloc would still probably be okay, but at the end of the day we decided to use a static allocation scheme with what amounts to a large array with chase pointers in each slot, making allocation an O(1) operation with zero fragmentation (basically a slab allocator). Additionally, we can use mlock to keep those pages in memory to avoid any indeterminacy with swapping.
Variable-sized data is fed into a ring buffer with chase pointers and we keep pointers to things in the ring in the slab-allocated structures; we never copy out of the ring. We track the ring pointers and invalidate any data in a block that gets overwritten while in use (which is surprisingly cheap if you do it right).
(Sorry, that was a big digression, but I really like working on that code.)
Also, the number of people that properly write fixed-size allocation code without leaving gigantic security holes and undefined behavior open is small.
I would argue that writing strncpy(foo, bar, BUFSIZE) is less error-prone than strncpy(foo, bar, dynamically_allocated_size_that_changes). (I admit that’s a contrived example.)
Again, obviously, not everything can work this way. There are times when you have to use dynamic allocation, but, at least in my experience, people have a bigger problem tracking reference counts and avoiding use-after-free than they do dealing with fixed-size buffers.
it’s easy to end up with a codebase that is unreadable and ungreppable because of having to continually expand non-intuitive macros
That’s true, although macros are also sometimes used to fix the problem that C codebases are often hard to grep in the first place. The Linux kernel uses a whole series of WARN macros partly for that reason. Lots easier to grep for WARN_ONCE in a big source tree than have to pore through every inline use of printk.
One loosely related trick I found quite recently is that C++ lets you template on array size, which lets you overload things that take a buffer + size to only take the buffer (can’t do it in plain C), and also lets you implement an ARRAY_COUNT macro that doesn’t compile if you use it on pointers.
template< typename T, size_t N >
char ( &ArrayCountObj( const T ( & )[ N ] ) )[ N ];
#define ARRAY_COUNT( arr ) ( sizeof( ArrayCountObj( arr ) ) )
I believe that declares a function returning a reference to an array of N chars? Anyway the error you get is crap, but it does work (int * a; ARRAY_COUNT( a )):
main.cc: In function ‘int main()’:
main.cc:5:57: error: no matching function for call to ‘ArrayCountObj(int*&)’
#define ARRAY_COUNT( arr ) ( sizeof( ArrayCountObj( arr ) ) )
^
main.cc:9:9: note: in expansion of macro ‘ARRAY_COUNT’
return ARRAY_COUNT( a );
^~~~~~~~~~~
main.cc:4:9: note: candidate: template<class T, long unsigned int N> char (& ArrayCountObj(const T (&)[N]))[N]
char ( &ArrayCountObj( const T ( & )[ N ] ) )[ N ];
^~~~~~~~~~~~~
main.cc:4:9: note: template argument deduction/substitution failed:
main.cc:5:57: note: mismatched types ‘const T [N]’ and ‘int*’
#define ARRAY_COUNT( arr ) ( sizeof( ArrayCountObj( arr ) ) )
^
main.cc:9:9: note: in expansion of macro ‘ARRAY_COUNT’
return ARRAY_COUNT( a );
^~~~~~~~~~~
Cool! If you can use a C++14 compiler (C++11 is probably good enough, but can’t recall the history of constexpr rules), you can write a constexpr function template which does the same:
template <typename T, size_t N>
constexpr size_t ARRAY_COUNT(const T(&)[N]) {
return N;
}
(ideone)
As for the error message, I wonder if you could do something nasty with SFINAE and static_assert to improve that.
Unfortunately C++ (e.g. template< typename T, size_t N > char ( &ArrayCountObj( const T ( & )[ N ] ) )[ N ];) is unreadable even with syntax highlighting.
Note that with some GCC extensions you can implement an array-count macro that rejects pointers (or any non-array type, really) in C…from the Linux kernel:
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
#define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))
#define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
Hmm … what browser are you using? Unless there’s a horrible bug, it should add a diaeresis to “preeminent” in the default demo text. If it finds a word in the demo text (which you can edit) that should have a diaeresis, it will add it (so “cooperate” becomes “coöperate”, “zoological” becomes “zoölogical”, and so forth).
You can read more about the diaeresis, and why it’s useful, in the New Yorker — one of the few publications that still uses it.
Hmm, that’s not good. I’m running the latest Firefox (50.0.2) too, on Linux. Are there any errors in the console?
I run Linux, but based on my BrowserStack testing it should work at least on IE 9 through Edge 13 (running on Windows 10).
Are there any errors in the console? Are you running Edge 38? What version of Windows are you running?
Wait, I had to manually toggle it off then back on, and then it adds them. It won’t automatically do so, which is seemingly implied by the text box.
For the default text, or for text you type in the box?
If it doesn’t work on the existing text (preëminent), that’s a bug. If it doesn’t work when you type text into the box until you toggle it, that’s a missing feature: I should make it autorun when the demo textbox loses focus, but I haven’t yet.
If it’s the first, I’m guessing it’s related to how bleeding Edge handles DOMContentLoaded events…
Would it be too much to have it run on every change event issued by the text box? That way you’d get to see it as you type.
By the way, as a fan of the diaeresis: thank you! I hope my suggestion doesn’t require too much reëngineering. :-)
I tried running it on each change, but depending on how much text there is and how fast the user’s typing, it can get expensive. So for now the demo runs Diaeresis when the text box loses focus.
Let me know what you think :)
Google domains has been around for over a year now, hasn’t it?
I guess now they are just using their own fancy TLD (.google) they bought to host it?
icann had a process for companies to buy new gTLDs. As I recall it was something like 180k and required proof of infrastructure to run and manage it. Requirements are outlined in some giant pdfs at: https://newgtlds.icann.org/en/applicants/agb
You can see Google’s application here apparently.
ICANN expanded gTLDs and put them up for public-ish sale a few years ago. Since 2012, the number of gTLDs has exploded. Now we have a whole bunch of topical gTLDs like .plumbing or .coffee, and branded gTLDs like .google or .mormon. You can submit your very own gTLD application for the low low price of $185,000.
https://en.wikipedia.org/wiki/Generic_top-level_domain#New_top-level_domains
The list of TLDs is staggering these days. https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains
The problem is in, however, how those images get produced. Take https://github.com/CentOS/CentOS-Do... for example, from the official CentOS Dockerfile repository. What’s wrong with this? IT’S DOWNLOADING ARBITRARY CODE OVER HTTP!!!
What’s wrong with auditing the Dockerfile? Seems to me Docker is a lot more transparent than other methods. Thoughts?
It’s nice that you can audit them, but they’re all written like this. Docker claims it can be used for reproducible builds, but the first lines in every single Dockerfile are apt-get install a-whole-bunch-of-crap and npm/pip/gem install oh-my-god-thats-a-lot-of-packages. Nobody is actually trying to manage their dependencies or develop self-contained codebases, just crossing their fingers and hoping upstream doesn’t break anything.
How is this different from build systems that don’t use Docker? Sure, you might be using Jenkins to build stuff (and have to manage those hosts for the OS-level packages), but the npm/pip/gem/jar, etc., there’s no difference. You still have to manage your dependencies. In my experience, the Docker stuff helps with the OS-level packages (previously we had multiple Jenkins hosts that had the versions of things specific to projects – god help you if you accidentally built your project on the wrong host).
I use maven, where the release plugin enforces that releases only depend on releases, and releases are immutable, which together means that builds are reproducible (unless someone used version ranges, but the culture is to not do that). You can also specify the GPG keys to check signatures against for each dependency. It’s not the default configuration and there’s a bootstrapping problem (you’d better make sure the version of the gpg plugin cached on your Jenkins machine is one that actually checks), but it’s doable.
On personal projects and at work I’ve been putting all the dependencies I use in the source repository. Usually we include the source code, for build tools (premake, clang-format) we add binaries to the repo instead.
There are never any surprises from upstream, and you can build the code on any machine that has git and a C++ compiler.
There’s some friction adding a new library but I don’t think that’s a bad thing. If a dependency is really too difficult to integrate with our build system then the code is probably going to be difficult too. If we need to do something easy people will write it themselves.
At the risk of stating the obvious: if you audit the Dockerfile and it says “hey we downloaded this thing over HTTP and never checked the signature” there’s no way to tell if you got MITMed.
Okay, so then you use another Dockerfile (or write your own). This is a very strange tack to take; you may as well say that Rust is an insecure programming language because with a few lines of code you can create a trivial RCE vulnerability (open listener socket, accept connection, read line, spawn a shell command).
For what it’s worth, almost every Dockerfile I’ve used installs its dependencies using something like apt or yum/rpm – and signatures are checked! And when installing via apt isn’t an option, Docker doesn’t keep you from doing the right thing (download over https, check signatures). You’re just running shell commands, after all.
My point exactly. There’s nothing wrong with taking an existing Dockerfile that you find to be suspect, beefing it up by correcting some obvious security issues, and resubmitting it as a patch.
I fail to see what the author of the article thinks is a better alternative. I’m open to be convinced otherwise, but saying it’s actively harmful seems overstated.
For what it’s worth, almost every Dockerfile I’ve used installs its dependencies using something like apt or yum/rpm – and signatures are checked!
OK, so the signatures are checked. You still don’t know what version you got.
Then pin the damned versions (apt-get install <pkg>=<version>), point at snapshot repos, and upgrade deliberately. This problem is totally orthogonal to Docker. All typical package repos suffer from it. I only know of Nix that doesn’t.
This is why companies that care host their own registry for Docker images, just like they’ve done for Java, Python, Ruby, etc., for years. It is unfortunate that Docker didn’t design the registry system to be easily proxied, but this is easily worked around with current registry tools (Artifactory, for one).
It’s possible I’m not reading this charitably enough, or that I’m utterly misinformed, but:
I’ve come to the conclusion that Docker is actively harmful to organizations. Not the underlying technology…I think LXC is fantastic as are cgroups.
As I understand it, anyway, LXC isn’t quite an underlying technology of Docker. LXC is, like Docker, a collection of user-space utilities that interact with the Linux kernel’s “container” features (namespaces, cgroups) and fancy container-friendly filesystems like btrfs, ZFS, or OverlayFS[1]. Docker and LXC provide roughly the same level of abstraction.
This is to say nothing about the Docker project itself, which seems to think LTS is for suckers and everything should be bleeding edge. Bleeding edge FS. Bleeding edge Networking. Glibc? Fuck GNU as a staff, open source organization, and as a fucking crew. Let’s switch to Musl.
I’m on board with the wariness at Docker’s reliance on bleeding edge technologies (it was an enormous pain to run Docker on Ubuntu 12.04, which we are finally moving away from at work), but as far as I know nothing in Docker itself relies on musl. Some folks who build images might prefer to base them on musl-using distributions like Alpine. This jab hardly seems fair to aim at the Docker project itself, though.
(1): I believe that Docker was originally a wrapper of lxc, which might be why this misconception persists.
This is largely true, based on my experience. Good advice to follow.
I think the author is a bit harsh on Markdown, especially if you read the linked essay. It was designed for writing for the web, and was meant to be converted to HTML only. The author criticizes it for lacking things it was never meant to have.
But I do agree that it’s terrible for technical documentation and it’s a travesty that it’s become the norm in that respect. I recently switched to using Markdown for some internal documentation at work, mostly because Bitbucket will render it and it (hopefully) encourages others on the team to write docs. When I’m writing with it, though, it’s a mess. No description lists, linking between things is ugly, and you have to manually make things like a table of contents. I’d switch to mdoc but then I’m worried no one else will bother to write anything. Maybe that will happen anyway with the cruddiness of Markdown, so perhaps it won’t matter.
(Bonus: in the linked essay, I learned that GNU is apparently going to kill off info. No arguments here!)
I believe we mostly have GitHub becoming the norm to thank for that. Its native formatting for rich presentation is Markdown via the README.md. With GitHub being massively adopted and Markdown being the path of least resistance, it’s hardly surprising how this came to be. So that hampers mdoc adoption.
Because everybody writes their documentation in README.md now (if at all), they also expect Markdown to man page converters. Those emit man(7) more or less by necessity. People unfamiliar with mandoc won’t care, but those who are may be annoyed by the semantic information that is lost: not only does mandoc produce worse HTML output because of it, but mandoc’s semantic search also won’t work for those man pages. However, the group unfamiliar with mandoc intersects more or less entirely with the group of people who write Markdown exclusively. This further hampers mdoc adoption.
We have a man page and documentation problem. And there doesn’t seem to be a way to help it.
You’ve long been able to render AsciiDoc, org, rST, among other lightweight markup languages, to HTML with GitHub. For example:
I dunno why Markdown “won”, but maybe it had something to do with:
Looks like that was from 2014 and I can’t tell if the proposed replacement is genuine or a very bad joke. Either way, it doesn’t feel like much has changed in 4 years unfortunately.
This is a specific solution (VSCode specific), but I discovered markdown-toc recently and I like it. There are other tools that can be used to add a TOC to a markdown based on headings.
I actually find Markdown convenient for Readmes and user land documentation. Because I can get by with very little markup and the markup reads fine as text it encourages me to write.
I can totally see how it would be a pain for technical documentation for code bases, but I suspect you would use .rst and an assist from an automatic documentation generator for that.
I am regretting the Markdown choice already and I’ll probably switch to something else that can export Markdown so it can be read directly while browsing the repo.
So yeah, whether that’s .rst or not remains to be seen, but the pattern is the same.