See “boolean blindness”: https://existentialtype.wordpress.com/2011/03/15/boolean-blindness/
The Qt API design principles, which are a great document, also discuss this: https://wiki.qt.io/API_Design_Principles#The_Boolean_Parameter_Trap
The Qt principles discussion says they have moved from bare boolean positional arguments to enum-based arguments. I agree that the former are hard to read, because they are not self-describing.
The original post, however, seems to be talking about named arguments, which, though they use a boolean type, are self-describing. That is, you see not just true but preview: true, which is much clearer.
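The difference is easy to see at the call site. A minimal sketch (the `render` functions and the `Mode` enum are hypothetical, invented only to illustrate the contrast):

```python
from enum import Enum

def render_v1(document, preview):
    """Hypothetical API with a bare boolean parameter."""
    return ("preview" if preview else "final", document)

class Mode(Enum):
    PREVIEW = 1
    FINAL = 2

def render_v2(document, mode):
    """Same hypothetical API with an enum parameter, as the Qt guidelines suggest."""
    return (mode.name.lower(), document)

render_v1("report", True)           # bare positional boolean: true what?
render_v1("report", preview=True)   # named argument: self-describing, even as a boolean
render_v2("report", Mode.PREVIEW)   # enum: self-describing at every call site
```

The middle call shows the original post's point: with a named argument, the boolean itself is no longer blind.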
Also try http://lcamtuf.coredump.cx/oldtcp/tcpseq.html on for size.
Man, that’s pretty.
I know non-crypto PRNGs have their place, and had even more of one back when crypto operations were slower, but I wonder whether these days more apps should default to a PRNG based on a cipher design, like ChaCha20/8 or counter-mode AES where it's hardware-accelerated. We know those functions really well from cryptographers' study of them, and they're pretty fast now: gigabytes per second rather than megabytes.
It’s sort of like how a lot of environments use keyed hashes even for hashtables, like SipHash and Go’s aeshash. It’s always possible to switch to PCG or something if you need it, but if you don’t, might as well go with something we know is unpredictable.
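The counter-mode construction is simple: key a well-studied primitive with the seed and hash an incrementing counter. A sketch in Python (the standard library has no ChaCha20, so this uses keyed BLAKE2b purely as a stand-in for the same idea; `CounterPRNG` is an illustrative name, not a real library):

```python
import hashlib

class CounterPRNG:
    """Sketch of a PRNG built from a cryptographic primitive run in
    counter mode. Real designs would use ChaCha20 or AES-CTR; keyed
    BLAKE2b is used here only because it ships with Python."""

    def __init__(self, seed: bytes):
        self.key = seed       # the seed becomes the key
        self.counter = 0      # block counter, hashed to produce output
        self.buffer = b""     # unconsumed output bytes

    def _refill(self):
        block = self.counter.to_bytes(8, "little")
        self.buffer += hashlib.blake2b(block, key=self.key).digest()
        self.counter += 1

    def randbytes(self, n: int) -> bytes:
        while len(self.buffer) < n:
            self._refill()
        out, self.buffer = self.buffer[:n], self.buffer[n:]
        return out

rng = CounterPRNG(b"example seed")
print(rng.randbytes(16).hex())
```

Predicting future output without the key requires breaking the underlying function, which is exactly the property the non-crypto designs lack.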
This article is well-intentioned, but there are a couple of problems with it as a summary of what we should fix in the open-source ecosystem:
First, the factual inaccuracies:
Linux drivers are usually much worse (they require a lot of tinkering, i.e. manual configuration) than Windows/Mac OS drivers in regard to support of non-standard display resolutions, very high (a.k.a. HiDPI) display resolutions or custom refresh rates.
I’m not able to discern what real phenomenon this is describing. Drivers have never been where users configure display modes, and Xorg.conf is a thing of the past. For any case where a display doesn’t work right with Linux (and it is the kernel’s responsibility since kernel modesetting was adopted the better part of a decade ago), a bug should definitely be filed on kernel.org (and linked in this article).
No reliable sound system, no reliable unified software audio mixing (implemented in all modern OSes except Linux), many old or/and proprietary applications still open audio output exclusively causing major user problems and headache.
ALSA has software downmixing (dmix) and all other Linux audio systems live atop ALSA. We can’t fix proprietary applications, but you can virtualize their access to the sound hardware.
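For reference, software mixing via dmix takes only a few lines of ALSA configuration. A sketch (device names, card numbers, and the IPC key vary by system; on most modern distributions the default device is already routed through dmix, so this is usually only needed for unusual setups):

```
# ~/.asoundrc (sketch; "hw:0,0" means first card, first device)
pcm.!default {
    type plug
    slave.pcm "dmixer"
}
pcm.dmixer {
    type dmix
    ipc_key 1024        # any integer unique on the system
    slave {
        pcm "hw:0,0"
        rate 48000
    }
}
```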
Wayland works through rasterization of pixels which brings about two very bad critical problems which will never be solved: Firstly, forget about performance/bandwidth efficient RDP protocol (it’s already implemented but it works by sending the updates of large chunks of the screen, i.e. a lot like old highly inefficient VNC), forget about OpenGL pass through, forget about raw compressed video passthrough. In case you’re interested all these features work in Microsoft’s RDP. Secondly, forget about proper output rotation/scaling/ratio change.
So does X11, and so do the Windows and Android display systems. I don’t know the precise interactions between macOS applications and the display server, but usage of “Display PostScript” might imply OS X does pass vector images to the display server. However, all of this is immaterial: Wayland is an IPC protocol, and can easily be amended to support vector graphics if there ever appears a reason to do so: new buffer types can be negotiated for contents like vector graphics or compressed video. For now, no clients would use it and it would not improve performance or appearance. All graphically intensive X11 programs operate under a raster paradigm and do not use X11 drawing primitives.
OpenGL passthrough is a very specialized use case, and much like remote Wayland in general, it can be done as long as there's coordination on both sides. Saying that Wayland makes this impossible is disingenuous. Pixel-based remoting works well and is still what happens in RDP for much of the screen contents (e.g. Firefox or Chrome over RDP); in fact, the reference Wayland compositor can be accessed over a free-software implementation of Microsoft's RDP.
Then there’s the fact that many of these are coordination problems. We can’t fix “there are too many distributions” or “[insert proprietary software] does not play nice” or “no unified interface to [thing]”. We can and should fix specific problems with a given distribution or a given interface to functionality. But fundamentally in open-source work you can only do your best to provide working code; it isn’t possible to forcibly remove the broken stuff in other people’s repositories and distributions. “Different programs do things differently” is not a problem we can fix. On the other hand, “there is no way for programs X and Y to share code” is a problem we can and should fix, and it will alleviate many of the problems described here. We should be finding ways to share more code between GNOME and KDE; GTK, EFL, and Qt; et cetera.
Most of the criticisms of the Linux kernel are very valid and point to failings in the design of UNIX: resources should be revocable, and kernel subsystems should have some isolation. That said, many of the problems listed here are also complaints about work that simply hasn’t been done due to lack of hardware documentation or developer power.
In general this article spends way too much time and effort complaining that the Linux ecosystem is not a product, and as such fails badly at things products do such as provide customer service and backwards compatibility guarantees. A multitude of outstanding bugs is worse than having all those bugs closed, but ultimately fixing bugs requires lots of work, and not many people can afford to work on Linux. It isn’t productive to respond to the work of those who can by saying, as quoted, “Fuck it! This ‘OS’ is a fucking joke.”
many of the problems listed here are also complaints about work that simply hasn’t been done due to lack of hardware documentation or developer power
The fact that there are good reasons for the failure does not solve the issue.
It might suggest that “building a free desktop OS” is not a successful model and we might as well give up. However, I would also ask if “building a desktop OS” in general is a profitable model. Apple does not make its profit by selling OSX or iOS. Even Microsoft does not make much profit from Windows anymore.
tl;dr: memory leaks, space leaks, and fragmentation
The third type from the article is memory fragmentation between a VM and an OS, caused by the VM lazily returning unused memory to the OS. This is quite different from the common meaning of fragmentation, which describes free memory being scattered across non-contiguous blocks.
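The common meaning can be shown with a toy model (a hypothetical fixed-size-block heap, not a real allocator): half the memory is free, yet no multi-block allocation can succeed because no two free blocks are adjacent.

```python
# Toy heap of 16 fixed-size blocks; True = in use.
heap = [True] * 16

# Free every other block: 50% of memory is now free...
for i in range(0, 16, 2):
    heap[i] = False

def largest_free_run(h):
    """Length of the longest run of contiguous free blocks."""
    best = run = 0
    for used in h:
        run = 0 if used else run + 1
        best = max(best, run)
    return best

# ...but the largest contiguous free region is a single block,
# so a request for even 2 contiguous blocks cannot be satisfied.
print(heap.count(False), largest_free_run(heap))  # prints: 8 1
```

The VM/OS case in the article is different in kind: the memory is genuinely unused and reclaimable, the VM just hasn't handed it back yet.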