I imagine that this has about the same restrictions as Docker on macOS with regard to networking, since it’s basically running Docker inside a minimal Linux VM.
Okay, so I’m not sure what I’m doing here, but I just ran a couple of tests, and my sample directory is, I kid you not, full of pictures of Rick Astley.
Rick Astley is inserted when you a) generate an image that is detected as NSFW or b) run out of VRAM, I believe? cf. https://github.com/knsg16/stable-diffusion#i-just-got-rickrolled-do-i-have-a-virus
I briefly fell down a similar rabbit hole a while back with one of those Planck ortholinear keyboards, but I returned to a regular keyboard layout for the following reasons:
But if it works for the OP, I’m all for it. Ergonomics are super important in this line of work.
I have the Planck EZ and have more or less the same problems, but I’m about 4 months in:
- I have made custom layouts for games
- I sometimes work away from my home office, and at those times I do carry the Planck EZ along with all the wiring
- In extremis I can always fall back to the computer’s built-in keyboard and it’s not all that jarring (for me, anyway).
Also, 4 months in, I have observed that I am using the keyboard wrong and/or the columnar layout is not helping me much; my fingers travel a lot anyway. I think I must be using it “wrong”.
Yeah, it’s not insurmountable, but I think I underplayed how much I play video games; it’s not feasible to program a new layer every time you find a new game with slightly different keybindings.
As much as I like the whole QMK open firmware project (and its related projects), it’s not exactly a rapid process to change things around.
True. I don’t play a whole lot on the computer, I’m more of a console person (and will often prefer a controller even on computer). When I do play on the computer, games tend to have similar bindings by “genre”, more or less. If I wanted to play more using my Planck, I probably would have layers by genres.
i have been alternating between a staggered-qwerty (laptop keyboard) and ortho-colemak (the ferricy), and i am comfortable with both now! i have been able to consistently hit ~90 WPM on both layouts, it takes me a few minutes of typing to “switch” my brain over from one to the other.
Nice overview. I’ve been rocking 42 keys for nearly a decade now and I’d never go back, but I really only have one layer I use regularly.
One thing I’m curious about that the article didn’t mention: how long did it take you to get proficiency in this layout? (For me it took about 3 weeks to get fast on the Ergodox, and once I had that proficiency, bringing it down to 42 keys on the Atreus only took 2 weeks, but from what I hear about other people switching to the Atreus, 3-4 weeks is common.)
glad you liked it. big fan of the unibody-split design of the atreus.
the descent to 34 was gradual. i started out with a Lotus58, plucked out a few keys until i got to 48, 36 and finally 34. All in all, it took me around 3 months to go from a 60% to a 35%. That being said, I am not as fast as I was on staggered-qwerty yet. I am currently hovering at about 90 WPM on the ferricy, whereas I could hit upwards of 130 on qwerty. going from 36 to 34 was particularly tricky, every key is load-bearing at that point.
I’ve been using non-standard layouts for 15+ years, and a mix of ortholinear and normal staggered keyboards for 5+ years. I can switch layouts mid-sentence and go between staggered and ortho layouts in a breeze as well (the only awkward part is having two keebs on the same table); typing should live entirely in your muscle memory and not in your head. It can be done, without issue.
And keyboards like these have nearly nothing to do with ergonomics. Keyboards are awkward and stupid to use for humans. :)
And keyboards like these have nearly nothing to do with ergonomics. Keyboards are awkward and stupid to use for humans. :)
Do you think any writing/typing implement is ergonomic?
The closest are probably Maltron, Kinesis Advantage, Dactyl and friends. And while I love using dvorak I don’t have any delusions that my layout of choice would improve ergonomics in any way (beyond placebo, which is powerful in itself).
I switched to ortholinear about 4-ish months ago. I swap back to a standard TKL board for gaming, though that’s partially because I have a tented split keyboard. At first I had a little trouble switching between ortholinear and staggered, but after the first two weeks or so I haven’t had much trouble going back and forth.
Ergonomics are super important in this line of work.
Agreed. And it’s great that there are so many keyboard options because it seems everyone needs something different. I love the Planck, despite its flaws. After trying a few different styles I settled on the Planck because I have small hands and the less distance my fingers have to travel the better.
I was last issued a business card 15 years ago. I get the feeling they’re about as relevant as fax machines nowadays.
Depends on the field and the cultural/country context you are working in. I work mostly with foreign ministries in public health research and cooperation. Business cards are often a means to be remembered and to show off some credentials (mostly by handing over your boss’s business card at the highest level, to show how to connect with higher levels).
The last two jobs I’ve had did not have business cards as part of my onboarding package, but I’ve actually been in a handful of situations lately where I needed a business card. Truthfully, it was only because another party had a business card and was expecting me to have one too.
They are pretty useful for in-person socialization/networking events. And basically useless outside of that. I’ve got like three boxes of the damn things lying around after three different business/contracting endeavors.
I’m starting up the work building the network infrastructure for the largest music festival in Northern Europe. It’s great fun, and we’re literally building a medium-sized service provider network in a week.
My guess is that the mypy developers reasonably thought that introducing such a major feature needs to be done gradually. If you see 100 errors when you run `mypy` the first time, that could be fixable in an afternoon. If you see 10,000, you just give up and stop trying.

To get to a good place, start with the default `mypy` configuration, then work your way up to strict compliance (see `mypy --help` and apply each of the `--strict` flags separately), and finally just use `mypy --strict`, which fixes most of the issues mentioned in this post. Or change to a compiled language.
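To make that concrete, here is a minimal sketch of the kind of code the default configuration accepts but the `--strict` family flags (the function names are invented for illustration):

```python
from typing import Any

def parse_port(raw):
    # Unannotated function: by default mypy doesn't check its body at all.
    # --disallow-untyped-defs (enabled by --strict) turns this into an error.
    return int(raw)

def hostname(config: Any) -> str:
    # Any switches off checking at this boundary; --warn-return-any
    # (also enabled by --strict) flags returning it as a str.
    return config["host"]
```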
Yeah, this is the way.
I can’t understand why the author (a) wants to use Python, a dynamically typed language, (b) wants to use type checking, but (c) will give up on type checking that isn’t 100%-guaranteed-enforced.

Add `mypy` to your CI. Congrats, now it’s enforced. There goes whole sections of this post.

Yes, `Any` exists (and deep down in CPython everything is a `PyObject*`, but shhhh). Its existence doesn’t mean that things, in your own codebase, that are hinted aren’t helpful! Or at least it’s very strange to argue that “anything could be Any, and Any is bad, so let’s not use hints at all, so now everything is Any!”

I have never seen a statically typed language that didn’t have optional dynamic typing. Java has `Object`, Haskell has `Dynamic` (and a few other options), C has `void*`. `Any` is hardly news to the space.
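As a sketch of that argument: even when an `Any` leaks in from a library, hints in your own code keep paying off downstream (`load_settings` and `greeting` here are made-up examples):

```python
import json
from typing import Any

def load_settings(path: str) -> dict[str, str]:
    raw: Any = json.load(open(path))  # json.load is typed as returning Any
    return raw                        # Any unifies with anything, so this passes

def greeting(settings: dict[str, str]) -> str:
    return "hello " + settings["user"]      # checked: both sides are str
    # "hello " + settings.get("user") would be flagged: str + (str | None)
```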
Add mypy to your CI. Congrats, now it’s enforced. There goes whole sections of this post.
This is the way.
We did this early on and it has done wonders for code quality.
I don’t know fish very well but, pardon my asking, why?
There’s a commonly held belief that one should at least think long and hard about using a more established programming language rather than trying to code in the shell. This feels like an even longer slide down a slippery slope.
Or am I missing something about fish shell’s magical-ness?
Yeah, the only reason anyone ever writes scripts in shell is that it’s guaranteed to already be installed, which isn’t true of fish.
Plus the I in fish stands for “interactive”; like … that’s very clearly what it’s designed for.
[T]he only reason anyone ever writes scripts in shell is that it’s guaranteed to already be installed
This right here.
Fish is very much something I would install on my personal laptop, but on the environments where I run a lot of shell scripts, it’s either bash or a higher level language like Python depending on the complexity.
Personally, another reason I use shell scripts is cuz there are nice primitives. Every machine I use has Python 3 installed, but `subprocess.Popen` isn’t the funnest interface.
I would love to have a shell scripting language that is fun to use and isn’t PowerShell.
Yes, process management in your average language sucks compared to bash. That’s why you can find libraries for $YOUR_LANGUAGE that do it significantly better than the standard library of that language. In your case, google for “python shell scripting library”.
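For what it’s worth, even the stdlib has grown friendlier than raw `Popen`; a rough sketch of a `ps aux | grep ssh | wc -l` equivalent with no shell involved:

```python
import subprocess

# Run ps, then do the grep/wc part in Python.
ps = subprocess.run(["ps", "aux"], capture_output=True, text=True, check=True)
print(sum(1 for line in ps.stdout.splitlines() if "ssh" in line))
```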
If you are like me and believe that if something is too frequent it should get its own syntax and in general prefer a language built ground up for this type of scripting, you are welcome to try Next Generation Shell (I’m the author).
Yeah, the only reason anyone ever writes scripts in shell is that it’s guaranteed to already be installed, which isn’t true of fish.
I don’t really understand what the problem is here. OK, I don’t work with servers in my daily job, but I’m pretty sure most people do install some software on those machines first. Why not add fish to the list of web servers/JS packages/Python packages/etc.? I mean, who runs their system bone stock (except on OpenBSD ;) ), so what’s the problem with installing fish?
There is a spectrum, but with four important points:

- POSIX shell, which you can assume is already installed
- `bash`
- another shell, such as `fish` or `zsh`
- a real programming language, such as Lua or Python

Each step in this list adds some potential friction and so should come with some benefit. There are some nice `bash`isms that make shell scripting easier, and `bash` is pretty easy to install on any system that has a POSIX shell, so it’s often worth using `#!/usr/bin/env bash` instead of `#!/bin/sh` for your shell scripts. Using a real programming language such as Lua has some bigger benefits but means that users probably need to install it.

Something like fish or zsh is in an interesting place because it’s as much effort as installing a programming language such as Lua or Go (easier than one like Python or Ruby), but generally the uplift in functionality relative to POSIX shell or `bash` is fairly small.
If your scripts are really scripts (i.e. mostly running other programs) then a shell scripting language may be a better fit than a programming language, but now you need to think about how much easier `fish` is than POSIX shell or `bash`, and whether the benefit outweighs the additional friction for users.
Note that the friction also applies to security audits. If someone is deploying your software in a container and it depends at run time on a scripting language, then that language implementation is now part of your attack surface. Things like Shellshock showed that `bash` isn’t always a great choice here, but it’s probably already in the attack surface for other things and so will be on the list of evaluated risks already. The same is true of something popular like Lua or Python (though that may not be a good thing, as it may already be flagged as high risk). Something like `fish` will likely trigger extra compliance processes.
It’s not that there’s a problem with installing fish; it’s that if you’re going to bother with having prerequisites, you’ve just discarded the one advantage shell programming has over regular programming languages.
Bit of a chicken-and-egg problem though: you have to install those interpreters, and what would you install them with?
At some point you’re interacting with the OS primitives. The most elegant thing at that level is `sh`, so you’re already using it.
Usually those are the same thing.
Ultimately there will always be some shell running. The only way to get around that is Ansible (and some kind of priming system), or, well, arguably it can get a lot more complicated to do very simple things.
Maybe someone should make a Linux distro with fish under the hood?
At some point you’re interacting with the OS primitives. The most elegant thing at that level is sh so you’re already using it.
Huh? In POSIX systems, the `system` libc function invokes a shell, but `[v,pd]fork`/`clone` + `execve` doesn’t go anywhere near the shell, and these expose things that the shell does not. Any higher-level language that exposes the process-creation system calls directly will give you more control than the shell and will not depend on the shell for anything.
Unlike VMS or DOS, the shell is not in any way special on a *NIX system; it’s just a program that talks to a terminal (or something else) and does some system calls. This is why things like `fish` can exist: the program that sits between you and the operating system’s terminal facilities can be anything that you want.
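A minimal Python sketch of that point, on a POSIX system; the process-creation calls never touch a shell:

```python
import os

pid = os.fork()                    # clone the process: no /bin/sh anywhere
if pid == 0:
    os.execvp("ls", ["ls", "-l"])  # replace the child's image directly
else:
    os.waitpid(pid, 0)             # reap the child
```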
I wouldn’t say fish is magical; it just seems like a small, sane shell language with fewer thermonuclear footguns. On paper it should be an ideal candidate for project automation and general scripting.
EDIT: I guess my point is that we often treat perl/ruby/python as “a better bash”, while it seems to me that fish should be a candidate for the same role.
From a strictly technical standpoint I can see where you’re coming from, but think about this in terms of commercial viability.
You do what you’re suggesting and create a pile of snowflake Fish shell code to run the company.
You then leave a year later for greener pa$$tures, and the next person inherits a technically superior codebase which sadly resolves into a perfect black hole of sadness and despair because NO-ONE anywhere uses this tool this way and NewPerson is totally, UTTERLY alone.
Yes. Easier to use correctly. If bash is a quoting hell where the right thing to do – using arrays – is awkward, fish trivializes both: compare `"${array[@]}"` (bash) with `$array` (fish). Every variable is an array and you don’t need to quote it.
Since fish is not POSIX compatible, fish scripts and bash scripts are not compatible. This was a deliberate choice to be better for interactive use, which makes fish an immature and inappropriate choice for a shebang.
It’s always seemed a bit insane to me that you needed to have this additional layer of manpower for maintaining packages in addition to the manpower required to just keep many of these open source projects afloat.
Maintaining packages for a single distribution seems like an enormous overhead, and it seems to me that we are seeing the consequences of this. I would be really surprised if this was a problem with debian alone.
That said, I do understand why it has to be like this, but it is clearly the highest core operational cost of running a linux distribution.
Debian policies do themselves no favors in maintenance. Take Go applications, for instance: every Go library must be packaged and maintained separately, which means potentially dozens or hundreds of dependency packages before a Go application can receive its own package.
Never mind that Debian has one of the most complex packaging standards of all the distributions. On top of that complex standard, there are probably a dozen different toolsets for meeting it.
I think when Linux was much smaller than it is now this was seen as feasible. You “only” needed to maintain the relationship between kernel, libc, and some utilities (that were handled by GNU, mostly). Plus it was a hobby OS mostly run on desktops.
Fast-forward a few years and your Linux server is a mission-critical, high-value target, but you still rely on the same volunteer organization to ensure it’s up-to-date.
Well, Linux as a production-critical OS isn’t really very new, to be fair. OP claims to have run Debian since 1998, and that’s 24 years ago (sorry…).
Just because I spent 15 minutes procrastinating with this, here’s the growth of packages within Debian historically:
| Release | Amount of amd64 packages in main | |
|-------------------------+----------------------------------+--------------|
| squeeze (2011) | 16503 | |
| wheezy (2013) | 20041 | WWW |
| jessie (2015) | 23624 | WWWWWW. |
| stretch (2017) | 26240 | WWWWWWWW; |
| buster (2019) | 28489 | WWWWWWWWWW: |
| bullseye (2021) | 29940 | WWWWWWWWWWWc |
| bookworm (next release) | 30624 | WWWWWWWWWWWW |
Fun fact: Debian provides direct access to a PostgreSQL database with a lot of statistics about everything Debian related. I think that is kind of neat.
Nice graphics, but a bit disingenuous because you probably should’ve used max(amd64, i386), especially for 2011 ;)
Yeah, it’s not meant to be disingenuous, rather just sloppily made :)
I think the main takeaway, if you disregard 2011 specifically, is that the number of packages grows by 1,000–2,000 every release, so it’s not like the growth is in any way explosive; rather, it is possible for the Debian team to anticipate the ballpark number of packages in each subsequent release.
Yup. I think it’s fine. I only picked 2011 at random because I don’t actually remember when I transitioned my stuff to amd64. 10-12y sounds about right…
Also, a lot of software was probably a lot simpler back then and development perhaps even moved at a slower pace because open source was such a niche thing (like you said, hobby OS), so it was both more doable to keep packages up to date and also less of an issue if packages lagged behind.
Was this actually meant to be published?
I mean, it’s really an angry rant, and I don’t agree with most of its points. The argument that you have to learn vi before you can appreciate neovim is dumb. I’m sure it’s handy if you are often faced with environments with only vi, but a lot of users use neovim on their desktop and probably nowhere else, ever. I am also not forcing new Linux users to start with Linux From Scratch so they can appreciate all the convenience their favorite distribution has added.
Authors of books often use ancient word processors, like WordStar or WordPerfect, because that’s what they’ve already spent the time learning. Sitting in front of their vintage computers using vintage word processors, they don’t have to think about their tools, and can focus 100% on actually writing (not that this seems to work very well for George R.R. Martin…)
Programming is much the same. If you’re most comfortable using a totally out of fashion editor, because that’s what you learned ages ago, don’t let anyone try to convince you to use anything else.
I was a vim user, then became an emacs user with evil mode, and I learned the ropes before I became a full time programmer. My colleagues are largely using vscode, but I don’t see any reason to switch, now that my setup is working great for me.
I still use vim for two reasons:
The second is very much the sunk-cost fallacy. If I switched to an editor that could improve my productivity long term, then the fact that I’d probably have a few months of re-training my fingers is a relatively small cost. I’d happily switch to something better if it worked everywhere. The lack of an iOS version of something like VS Code makes me a bit nervous because most of the time I’m sitting in front of a physical computer, it’s running Windows, macOS, Android, or iOS and if I can’t install the editor on all of them then there’s a danger of needing to switch between them based on the device I’m using, but it doesn’t worry me too much because most of the time I’d want to use an iPad it would be for remote development and the browser-based version of VS Code is fine there. The limitations of the remote extension worry me a lot more because if it can’t even support FreeBSD (which, let’s be honest, is basically the same as Linux or macOS to the extent that a toolchain should care about) then there’s no chance that it will support whatever the next big thing is until long after it’s big.
The more experienced I become, the more I value tool chains that are available everywhere and simple
Git pull ; vim ; make ; run
It is great to be able to deploy a simple fix this way
I feel make along with vim are unsung heroes
I’d substitute cmake for make there. CMake might not be installed out of the box everywhere, but getting a CMake project to build on Windows is far less painful than trying to get make to work. POSIX Make is completely useless and so any non-trivial project tends to use vendor extensions. The bmake and gmake extensions are incompatible (Solaris Make, last I checked, did little beyond POSIX), so on any non-GNU platform you’re likely to need to install gmake as an optional extra anyway and at that point I’d rather have CMake and Ninja.
No doubt Martin uses whatever he likes when writing, but at least in the rest of the publishing chain Word is ubiquitous:
http://www.antipope.org/charlie/blog-static/2013/11/cmap-why-do-you-use-microsoft-.html
(Although I’m pretty sure Martin has an actual staff while Stross is still very much a one-man show)
For what it’s worth, I wrote all four of my books in vim and typeset them using LaTeX. The publisher required me to send camera-ready PDFs (thank you, crop package) and to match their in-house style as closely as possible (the only difficult bit of this was the layout of the copyright page), but this just skipped a load of the steps in the usual chain. Oh, and the really nice thing about doing it this way is that you give the copy editors and proof readers an immutable format so that they can’t make changes; they can just request changes, and I have to review each one and apply it, so when they make a ‘correction’ that incorrectly changes the meaning I can ignore it.
They didn’t have a good process for going from PDF to HTML and so completely messed up the formatting of the ePub editions of the first one. After that, I restricted myself completely to semantic markup in the TeX and wrote a small tool that dumped it as XHTML with styles reflecting the semantics so that they could style it however they wanted (I used libclang to parse my examples, so the XHTML included things like class-name, local-variable, argument, and so on).
Charlie does all the writing himself. I know he’s tried out Scrivener and other tools on occasion. There’s a small group of volunteers who moderate his blog and provide technical support for various things, and a larger but still quite small group of beta-test readers.
If you’re most comfortable using a totally out of fashion editor, because that’s what you learned ages ago, don’t let anyone try to convince you to use anything else.
I don’t believe that, for many, the reason to stick with a simpler editor is just convenience, or not wanting to learn new things.
For example I’ve started with the Turbo Pascal IDE, then switched to Visual Basic and Visual Studio C++ (both before .NET existed), then I’ve used Eclipse quite extensively for Java, and for some period even used the JetBrains IDE variant for Ruby, but I found that fiddling with them on each release, on each update, trying to navigate through the countless windows, pop-ups, menus, auto-complete, etc., took more time than what they were worth… (In fact from the early days of using Visual Studio I’ve disabled syntax highlighting and auto-complete, as I found them more distracting than useful.)
Thus I believe that for many the simplicity plays a key role. (At least for me it does.)
Appropriate, since it’s written in bash. Do I need to tell you what that means in Norwegian?
(hint: I wrote a sort of bash cleaner program that I couldn’t call that.)
There are some really interesting points in this article, and the re-design of emacs is something I remember seeing on the /r/emacs sub-reddit a while ago. It looks really good!
I’ve yet to try the design out myself, but some day I will.
I have never understood why KDE isn’t the default DE for any serious Linux distribution. It feels so much more professional than anything else.
Every time I see it, it makes me want to run Linux on the desktop again.
I suspect because:
Regarding the second reason: Plasma overall looks pretty nice, at least at first glance. Once you start using it, you’ll notice a lot of UI inconsistencies (misaligned UI elements, having to go through 15 layers of settings, unclear icons, applications using radically different styles, etc) and rather lackluster KDE first-party applications. Gnome takes a radically different approach, and having used both (and using Gnome currently), I prefer Gnome precisely because of its consistency.
There’s also a lot of politics involved. Most of the Linux desktop ecosystem is still driven by Red Hat, and they employ a lot of FSF evangelists. GNOME has GNU in its name and was originally created because of the FSF’s objections to Qt (prior to its license change), and that led to Red Hat preferring it.
Plus GNOME and all its core components are truly community FLOSS projects, whereas Qt is a corporate, for-profit project which the Qt company happens to also provide as open source (but where you’re seriously railroaded into buying their ridiculously expensive licenses if you try to do anything serious with it or need stable releases).
No one ever talks about cinnamon mint but I really like it. It looks exactly like all the screenshots in the article. Some of the customisation is maybe a little less convenient but I have always managed to get things looking exactly how I want them to and I am hardly a linux power user (recent windows refugee). Given that it seems the majority of arguments for plasma are that it is more user friendly and easier to customise, I would be interested to hear people’s opinions on cinnamon vs plasma. I had mobile plasma on my pinephone for a day or two but it was too glitchy and I ended up switching to Mobian. This is not a criticism of plasma, rather an admission that I have not really used it and have no first hand knowledge.
I have not used either in anger but there’s also a C/C++ split with GTK vs Qt-based things. C is a truly horrible language for application development. Modern C++ is a mediocre language for application development. Both have some support for higher-level languages (GTK is used by Mono, for example, and GNOME also has Vala) but both are losing out to things like Electron that give you JavaScript / TypeScript environments and neither has anything like the developer base of iOS (Objective-C/Swift) or Android (Java/Kotlin).
As an unrelated sidenote, C is also a decent binding language, which matters when you are trying to use one of those frameworks from a language that is not C/C++. I wish Qt had a well-maintained C interface.
I don’t really agree there. C is an adequate binding language if you are writing something like an image decoder, where your interface is expressed as functions that take buffers. It’s pretty terrible for something with a rich interface that needs to pass complex types across the boundary, which is the case for GUI toolkits.
For example, consider something like ICU’s `UText` interface, for exposing character storage representations for things like regex matching. It is a C interface that defines a structure that you must create, with a bunch of callback functions defined as function pointers. One of the functions is required to set up a pointer in the struct to contain the next set of characters, either by copying from your internal representation into a static buffer in the structure or by providing a pointer and setting the length to allow direct access to a contiguous run of characters in your internal representation. Automatically bridging this from a higher-level language is incredibly hard.
Or consider any of the delegate interfaces in OpenStep, which in C would be a `void*` and a struct containing a load of function pointers. Bridging this with a type-safe language is probably possible to do automatically, but it loses type safety at the interfaces.
C interfaces don’t contain anything at the source level to describe memory ownership. If a function takes a `char*`, is that a pointer to a C string, or a pointer to a buffer whose length is specified elsewhere? Is the callee responsible for freeing it, or the caller? With C++, smart pointers can convey this information and so binding generators can use it. Something like SWIG or Sol3 can get the ownership semantics right with no additional information.
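As a small illustration of that ownership gap, here is what binding one C function by hand from Python looks like; the `free` is something you have to know from the man page, not something the prototype tells you:

```python
import ctypes

libc = ctypes.CDLL(None)                 # load the already-linked libc (POSIX)
libc.strdup.restype = ctypes.c_void_p    # keep the raw pointer, don't auto-convert
libc.strdup.argtypes = [ctypes.c_char_p]

p = libc.strdup(b"hello")
# char *strdup(const char *) says nothing about who frees the result;
# the binding author has to encode that by hand.
libc.free(ctypes.c_void_p(p))
```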
Objective-C is a much better language for transparent bridging. Python, Ruby, and even Rust can transparently consume Objective-C APIs because it provides a single memory ownership model (everything is reference counted) and rich introspection functionality.
Fair enough. I haven’t really been looking at Objective-C headers as a binding source. I agree that C’s interface is anemic. I was thinking more from an ABI perspective, i.e. C++ interfaces tend to be more reliant on inlining, or have weird things like exceptions, as well as being totally compiler dependent. Note how, for instance, SWIG still generates a C interface with autogenerated glue. Also, the full ABI is defined in like 15 pages. So while it’s hard to make a high-level to high-level interface in C, you can manually compensate from the target language; with C++ you need a large amount of compiler support to even get started. Maybe Obj-C strikes a balance there; I haven’t really looked into it much. Can you call Obj-C from C? If not, it’s gonna be a hard sell to a project as a “secondary API” like llvm-c, because you don’t even get the larger group of C users.
Also, the full ABI is defined in like 15 pages
That’s a blessing and a curse. It’s also an exaggeration: the SysV x86-64 psABI is 68 pages. On x86-32 there are subtle differences in calling convention between Linux, FreeBSD, and macOS, for example, and Windows is completely different. Bitfields are implementation dependent, and so you need to either avoid them or understand what the target compiler does. All of this adds up to embedding a lot of a C compiler in your other language, or just generating C and delegating to the C compiler.
Even ignoring all of that, the fact that the ABI is so small is a problem because it means that the ABI doesn’t fully specify everything. Yes, I can look at a C function definition and know from reading a 68-page doc how to lower the arguments for x86-64 but I don’t know anything about who owns the pointers. Subtyping relationships are not exposed.
To give a trivial example from POSIX, the `connect` function takes three arguments: an `int`, a `const struct sockaddr *`, and a `socklen_t`. Nothing in this tells me:

- That the pointer is not actually a pointer to a `sockaddr` structure; it is a pointer to some other structure that starts with the same fields as the `sockaddr`.
- Whether the callee modifies the structure (you could guess from the `const`, and you’d be right most of the time).

I need to know all of these things to be able to bridge from another language. The C header tells me none of these.
Apple worked around a lot of these problems with CoreFoundation by adding annotations that basically expose the Objective-C object and ownership model into C. Both Microsoft and Apple worked around it for their core libraries by providing IDL files (in completely different formats) that describe their interfaces.
So while it’s hard to make a high-level to high-level interface in C, you can manually compensate from the target language; with C++ you need a large amount of compiler support to even get started
You do for C as well. Parsing C header files and extracting enough information to be able to reliably expose everything with anything less than a full C compiler is not going to work and every tool that I’ve seen that tries fails in exciting ways. But that isn’t enough.
In contrast, embedding something like clang’s libraries is sufficient for bridging a modern C++ or Objective-C codebase because all of the information that you need is present in the header files.
Can you call Obj-C from C?
Yes. Objective-C methods are invoked by calling `objc_msgSend` with the receiver as the first parameter and the selector as the second. The Objective-C runtime provides an API for looking up selectors from their name. Many years ago, I wrote a trivial libClang tool that took an Objective-C header and emitted a C header that exposed all of the methods as `static inline` functions. I can’t remember what I did with it, but it was on the order of 100 lines of code, so rewriting it would be pretty trivial.
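For a feel of how mechanical this is, a hedged ctypes sketch (macOS only; the real `objc_msgSend` signature varies per call, so the argtypes here are declared for this one use):

```python
import ctypes
import ctypes.util

objc = ctypes.CDLL(ctypes.util.find_library("objc"))
objc.objc_getClass.restype = ctypes.c_void_p
objc.objc_getClass.argtypes = [ctypes.c_char_p]
objc.sel_registerName.restype = ctypes.c_void_p
objc.sel_registerName.argtypes = [ctypes.c_char_p]
objc.objc_msgSend.restype = ctypes.c_void_p
objc.objc_msgSend.argtypes = [ctypes.c_void_p, ctypes.c_void_p]

NSObject = objc.objc_getClass(b"NSObject")  # look up the class by name...
sel_new = objc.sel_registerName(b"new")     # ...and the selector by name
obj = objc.objc_msgSend(NSObject, sel_new)  # equivalent to [NSObject new]
```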
If not, it’s gonna be a hard sell to a project as a “secondary api” like llvm-c, because you don’t even get the larger group of C users.
There are fewer C programmers than C++ programmers these days. This is one of the problems that projects like Linux and FreeBSD have in attracting new talent: the intersection between good programmers and people who choose C over C++ is rapidly shrinking and includes very few people under the age of 35.
LLVM has llvm-c for two reasons. The most important one is that it’s a stable ABI. LLVM does not have a policy of providing a stable ABI for any of the C++ classes. This is a design decision that is completely orthogonal to the language. There’s been discussion about making llvm-c a thin (machine-generated) wrapper around a stable C++ interface to core LLVM functionality. That’s probably the direction that the project will go eventually, once someone bothers to do the work.
I’ve been discounting memory management because it can be foisted off onto the user. On the other hand something like register or memory passing or how x86-64 uses SSE regs for doubles cannot be done by the user unless you want to manually generate calling code in memory.
You do for C as well. Parsing C header files and extracting enough information to be able to reliably expose everything with anything less than a full C compiler is not going to work and every tool that I’ve seen that tries fails in exciting ways. But that isn’t enough.
Sure but there again you can foist things off onto the user. For instance, D only recently gained a proper C header frontend; until now it got along fine enough by just manually declaring extern(C) functions. I believe JNI and CFFI do the same. It’s annoying but it’s possible, which is more than can be said for many C++ bindings.
There are fewer C programmers than C++ programmers these days.
I meant C as a secondary API, ie. C++ as primary then C as auxiliary, as opposed to Objective-C as auxiliary.
Yes. Objective-C methods are invoked by calling objc_msgSend with the receiver as the first parameter and the selector as the second. The Objective-C runtime provides an API for looking up selectors from their name.
I don’t know the requirements for deploying with the ObjC runtime. Still, nice!
I’ve been discounting memory management because it can be foisted off onto the user.
That’s true only if you’re bridging two languages with manual memory management, which is not the common case for interop. If you are exposing a library to a language with a GC, automatic reference counting, or ownership-based memory management, then you need to handle this. Or you end up with an interop layer that everyone hates (e.g. JNI).
Sure but there again you can foist things off onto the user. For instance, D only recently gained a proper C header frontend; until now it got along fine enough by just manually declaring extern(C) functions. I believe JNI and CFFI do the same. It’s annoying but it’s possible, which is more than can be said for many C++ bindings.
Which works for simple cases. For some counterexamples, C has `_Complex` types, which typically follow different rules for argument passing and returning than structures of the same layout (though they sometimes don’t, depending on the ABI). Most languages don’t adopt this stupidity, and so you need to make sure that your custom C parser can express some C complex type. The same applies if you want to define bitfields in C structures in another language, or if the C structure that you’re exposing uses `packed` pragmas or attributes, uses `_Alignas`, and so on. There’s a phenomenal amount of complexity that you can punt on if you want to handle only trivial cases, but then you’re using a very restricted subset of C.
JNI doesn’t allow calling arbitrary C functions; it requires that you write C functions that implement `native` methods on a Java object. This scopes the problem such that the JVM needs to be able to handle calling only C functions that use Java types (8- to 64-bit signed integers or pointers) as arguments and return values. These can then call back into the JVM to access fields, call methods, allocate objects, and so on. If you want to return a C structure into Java then you must create a buffer to store it and an object that owns the buffer and exposes `native` methods for accessing the fields. It’s pretty easy to use JNI to expose Java classes into other languages that don’t run in the JVM; it’s much harder to use it to expose C libraries into Java (and that’s why everyone who uses it hates it).
I meant C as a secondary API, ie. C++ as primary then C as auxiliary, as opposed to Objective-C as auxiliary.
If you have a stable C++ API, then bridging C++ provides you more semantic information for your compat layer than a C wrapper around the stable C++ API would. Take a look at Sol3 for an example: it can expose C++ objects directly into Lua, with correct memory management, without any C wrappers. C++ libraries often conflate a C API with an ABI-stable API but this is not necessary.
I don’t know the requirements for deploying with the ObjC runtime. Still, nice!
The requirements for the runtime are pretty small but for it to be useful you want a decent implementation of at least the Foundation framework, which provides types like arrays, dictionaries, and strings. That’s a bit harder.
I don’t know. I feel like you massively overvalue the importance of memory management and undervalue the importance of binding generation and calling convention compatibility. For instance, as far as I can tell sol3 requires manual binding of function pointers to create method calls that can be called from Lua. From where I’m standing, I don’t actually save anything effort-wise over a C binding here!
Fair enough, I didn’t know that about JNI. But that’s actually a good example of the notion that a binding language needs to have a good semantic match with its target. C has an adequate to poor semantic match on memory management and any sort of higher-kinded functions, but it’s decent on data structure expressiveness and very terse, and it’s very easy to get basic support working quick. C++ has mangling, a not just platform-dependent but compiler-dependent ABI with lots of details, headers that often use advanced C++ features (I’ve literally never seen a C API that uses _Complex - or bitfields) and still probably requires memory management glue.
Remember that the context here was Qt vs GTK! Getting GTK bound to any vaguely C-like language (let’s say any language with a libc binding) to the point where you can make calls is very easy - no matter what your memory management is. At most it makes it a bit awkward. Getting Qt bound is an epic odyssey.
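To put a rough number on “very easy”, here is a hedged ctypes sketch of calling GTK 3 directly; it assumes the usual Linux soname `libgtk-3.so.0`, binds no signals, and leaks everything, which is exactly the shape of the tradeoff described above:

```python
import ctypes

gtk = ctypes.CDLL("libgtk-3.so.0")
gtk.gtk_window_new.restype = ctypes.c_void_p         # pointers must not default to int
gtk.gtk_widget_show_all.argtypes = [ctypes.c_void_p]

gtk.gtk_init(None, None)
win = gtk.gtk_window_new(0)                          # 0 == GTK_WINDOW_TOPLEVEL
gtk.gtk_widget_show_all(win)
gtk.gtk_main()                                       # runs until killed: no signals bound
```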
I feel like you massively overvalue the importance of memory management and undervalue the importance of binding generation and calling convention compatibility
I’m coming from the perspective of having written interop layers for a few languages at this point. Calling conventions are by far the easiest thing to do. In increasing levels of difficulty, the problems are:
C only seems easy because C<->C interop requires a huge amount of boilerplate and so C programmers have a very low bar for what ‘transparent interoperability’ means.
For instance, as far as I can tell sol3 requires manual binding of function pointers to create method calls that can be called from Lua. From where I’m standing, I don’t actually save anything effort-wise over a C binding here!
It does, because it’s an EDSL in C++, but that code could be mechanically generated (and if reflection makes it into C++23 then it can be generated from within C++). If you pass a C++ `shared_ptr<T>` to Sol3, then it will correctly deallocate the underlying object once neither Lua nor C++ references it any longer. This is incredibly important for any non-trivial binding.
Remember that the context here was Qt vs GTK! Getting GTK bound to any vaguely C-like language (let’s say any language with a libc binding) to the point where you can make calls is very easy - no matter what your memory management is.
Most languages are not ‘vaguely C-like’. If you want to use GTK from Python, or C#, how do you manage memory? Someone has had to write bindings that do the right thing for you. From my vague memory, it uses GObject, which uses C macros to define objects and to manage reference counts. This means that whoever manages the binding layer has had to interop with C macros (which are far harder to get to work than C++ templates - we have templates working for the Verona C++ interop layer but we’re punting on C macros for now and will support a limited subset of them later). This typically requires hand writing code at the boundary, which is something that you really want to avoid.
Last time I looked at Qt, they were in the process of moving from their own smart pointer types to C++11 ones but in both cases as long as your binding layers knows how to handle smart pointers (which really just means knowing how to instantiate C++ templates and call methods on them) then it’s trivial. If you’re a tool like SWIG, then you just spit out C++ code and make the C++ compiler handle all of this for you. If you’re something more like the Verona interop layer then you embed a C++ parser / AST generator / codegen path and make it do it for you.
I’m coming from the perspective of having written interop layers for a few languages at this point.
Yeah … same? I think it’s just that I tend to be obsessed with variations on C-like languages, which colors my perception. You sound like you’re a lot more broad in your interests.
C only seems easy because C<->C interop requires a huge amount of boilerplate and so C programmers have a very low bar for what ‘transparent interoperability’ means.
I don’t agree. Memory management is annoying, sure, and having to look up string ownership for every call gets old quick, but for a stateful UI like GTK you can usually even just let it leak. I mean, how many widgets does a typical app need? Grab heaptrack, identify a few sites of concern, jam frees in there, and move on with your life. It’s possible to do it shittily but easily, and I value that a lot.
If you’re a tool like SWIG, then you just spit out C++ code and make the C++ compiler handle all of this for you.
Hey, no shade on SWIG. SWIG is great, I love it.
From my vague memory, it uses GObject, which uses C macros to define objects and to manage reference counts. This means that whoever manages the binding layer has had to interop with C macros
Nah, it’s really only a few macros, and they do fairly straightforward things. Last time I did GTK, I just wrote those by hand. I tend to make binders that do 90% of the work - the easy parts - and not worry about the rest, because that conserves total effort. With C that works out because functions usually take structs by pointer, so if there’s a weird struct that doesn’t generate I can just define a close-enough facsimile and cast it, and if there’s a weird function I define it. With C++ everything is much more interdependent - if you have a bug in the vtable layout, there’s nothing you can do except fix it.
When I’ll eventually want Qt in my current language, I’ll probably turn to SWIG. It’s what I used in Jerboa. But it’s an extra step to kludge in, that I don’t particularly look forward to. If I just want a quick UI with minimal effort, GTK is the only game in town.
edit: For instance, I just kludged this together in half an hour: https://gist.github.com/FeepingCreature/6fa2d3b47c6eb30a55846e18f7e0e84c This is the first time I’ve tried touching the GTK headers in this language. It’s exposed issues in the compiler, it’s full of hacks, and until the last second I didn’t really expect it to work. But stupid as it is, it does work. I’m not gonna do Qt for comparison, because I want to go to bed soon, but I feel it’s not gonna be half an hour. Now, to be fair, I already had a C header importer around, and that’s a lot of sunk time that C++ doesn’t get. But also, I would not have attempted to write even a kludgy C++ header parser, because I know that I would have given up halfway through. And most importantly, that kludgy C header importer was already practically useful after a good day of work.
edit: If there’s a spectrum of “if it’s worth doing, it’s worth doing properly” to “minimal distance of zero to cool thing”, I’m heavily on the right side. I think that might be the personality difference at play here? For me, a binding generator is purely a tool to get at a juicy library that I want to use. There’s no love of the craft lost there.
So does plasma support Electron/Swift/Java/Kotlin? I know electron applications run on my desktop so I assume you mean directly as part of the desktop. If so that is pretty cool. Please forgive my ignorance, desktop UI frameworks are way outside my usual area of expertise.
I only minimally use KDE on the computers at my university’s CS department, but I’ve been using Cinnamon for almost four years now. I think that Plasma wins on customizability: there are just so many things that can be adjusted.
Cinnamon on the other hand feels far more polished, with fewer options for customization. I personally use cinnamon with Arch, but when I occasionally use Mint, the full desktop with all of mint’s applications is very cohesive and well thought out, though not without flaws.
I sometimes think that Cinnamon isn’t evangelized as frequently because it’s well enough designed that it sort of fades into the background while using it.
I’ve used Cinnamon for years, but it inevitably breaks (or I break it). I recently looked into the alternatives again, and settled on KDE because it looked nice, it and Gnome are the two major players so things are more likely to Just Work, and it even had some functionality I wanted that Gnome didn’t. I hopped back to Cinnamon within the week, because yeah, the papercuts. Plasma looks beautiful in screenshots, and has a lot of nice-sounding features, but the moment you actually use it, you bang your face into something that shouldn’t be there. It reminded me of first trying KDE in the mid-2000s, and it was rather disappointing to feel they’ve been spinning in circles in a lot of ways. I guess that isn’t exactly uncommon for the Linux desktop though…
I agree with your assessment of Plasma and GNOME (Shell). Plasma mostly looks fine, but every single time I use it–without fail–I find some buggy behavior almost immediately, and it’s always worse than just having misaligned labels on some UI elements, too. It’s more like I’ll check a setting checkbox and then go back and it’s unchecked, or I’ll try to put a panel on one or another edge of the screen and it’ll cause the main menu to open on the opposite edge like it looped around, or any other number of things that just don’t actually work right. Even after they caved on allowing a single-key desktop shortcut (i.e., using the Super key to open the main menu), it didn’t work right when I would plug/unplug my laptop from my desk monitors because of some weirdness around the lifecycle of the panels and the main menu button; I’d only be able to have the Super key work as a shortcut if it was plugged in or if it was not, but not both. That one was a little while ago, so maybe it’s better now.
Ironically, Plasma seems to be all about “configuration” and having 10,000 knobs to tweak, but the only way it actually works reasonably well for me is if you don’t touch anything and use it exactly how the devs are dog-fooding it.
The GNOME guys had the right idea when it came to stripping options, IMO. It’s an unpopular opinion in some corners, but I think it’s just smart to admit when you don’t have the resources to maintain a high bar of quality AND configurability. You have to pick one, and I think GNOME picked the right one.
I have never understood why KDE isn’t the default DE for any serious Linux distribution.
Me neither, but I’m glad to hear it is the default desktop experience on the recently released Steam Deck.
Do SUSE/OpenSUSE not count as serious Linux distributions anymore?
It’s also the default for Manjaro as shipped by Pine64. (I think Manjaro overall has several variants… the one Pine64 ships is KDE-based.)
Garuda is also a serious Linux distribution, and KDE is their flagship.
I tried to use Plasma multiple times on Arch Linux, but every time it turned out to be too unstable. The most annoying bug I remember was that KRunner often crashed after entering some letters, taking down the whole desktop session with it. In the end I stuck with Gnome because it was stable and looked consistent. I do like the concept of Plasma, but I will avoid it on any machine I do serious work with.
I can only wonder if “the complete idiot” refers to the person who would actually run OpenBSD on a Pinebook Pro :)
I remember reading about the state of this exact setup a few years ago, and I’m a little sad to see that it hasn’t come further. Some of the stuff the blog post mentions is quite basic, like powering off the device, or suspend/resume.
This makes such a setup completely unusable for any serious activity.
A shame really.
It’s my guide, so I’m the complete idiot, no question at all :)
My (perhaps naive) hope is that this writeup might help more people get into this (it’s the guide I would have loved when I started figuring out all the bits and pieces, and it would have spared me a lot of time). So maybe the number of people looking at the issues will increase. I would (also naively) assume that since the hardware is (supposed to be) open, adding support for the missing stuff, at least for the small-ish ones, should not be difficult. But it definitely takes knowledgeable developers, and I imagine they have better things to do (I know I have).
Whether it’s usable or not, serious or not, I guess it’s in the eye of the beholder. I like it, it’s quite snappy with a tiling WM, and I enjoy the bloat-free experience of OpenBSD. Here’s hoping that it improves.
Thanks for replying!
I guess I’m just a little bit disillusioned with the state of things. I’m glad that you are pushing the agenda with detailed write ups like these. Here’s to hoping that things improve!
I’ve been working on moving my dedicated server to a new dedicated server. The back story is really that Hetzner raised the prices for colocation in Germany, so I’ve bought a new one in their Helsinki datacenter.
I’m not really running anything serious on it, except a few websites and blogs and so on, so in theory the migration could be done quickly, but I’ve been wanting to try a few things out to fill in some blanks in my skillset, so I’ve been playing with Ansible, and the server runs Proxmox with LXC.
Oh yeah, and IPv6.
How would you describe the cost-to-performance-to-maintenance tradeoff for that? On one side you can just buy something and let it run; on the other side you’ll have to buy something, and if it breaks you may have to replace the whole server. Also drive failures…
Well, you’re not exactly right when it comes to the maintenance burden. Yes, it’s a physical server, so physical things might break, but since it’s hosted at Hetzner, they will fix it for me, and they have been quite responsive in the past when needed.
There’s no doubt that I could run this whole thing on a VM much cheaper, but it’s not that expensive. I don’t run anything really intensive on it either, but I like the peace of mind that this is not a shared platform, and I can do whatever the hell I want on it. Including testing weird apps I’m writing, spinning up VMs, running a remote desktop environment, or whatever. It’s my small cloudless corner where I set the rules (within reason, of course).
Seems quite close to Python’s type hints. Not mandatory to use at all, but if used correctly, it massively helps you find bugs.
Dynamic typing.
Its proponents claimed its superiority for years… And now all the mainstream dynamic languages are trying to add at least some static types. But you can’t just bolt that on easily; it needs to be baked into the heart of the language.
Languages like C++, Java and C# have been getting welcome additions like var, auto and polymorphic lambdas. There is virtually no modern language that requires you to specify the type of your iterator and good riddance, too. Let’s say that there is a convergence.
Var, diamond etc. mean that your code is still statically typed. You just don’t need to write the type by hand because it’s obvious to the compiler.

It can look similar to dynamic languages… but type inference is static typing to its core (it’s no coincidence that it comes from the ML family).
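Python’s checkers make the same point: leave the annotation out and the type is still there, statically. A two-line sketch, as mypy sees it:

```python
xs = [1, 2, 3]     # no annotation written; mypy infers list[int]
xs.append("four")  # error: Argument 1 to "append" of "list" has incompatible type "str"
```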
Moving static typing to tooling and giving hints to help that tooling is part of the convergence just like statically typed languages losing boilerplate is. There is no pedestal you can climb to say “they were wrong we were right all along”.
No. They were wrong. And that’s why they now need to add boilerplate.
You need to be statically typed to be safe and you need a good type system to be safe and remove boilerplate.
Most people familiar with mechanical keyboards are familiar with QMK. It’s really a great project, and it’s not very hard to make your own super advanced keyboard layout with macros and multiple keyboard layers.
Their web based keyboard configurator is really slick too!
It is cool indeed.
Oddly enough, nobody is making standard, full-sized mechanical keyboards with QMK as stock. That’s a gold mine waiting to happen for the first to market.
I fear some company will accidentally make a keyboard using a compatible arm or avr microcontroller, the community will pick up on it and it’ll thus get undeservedly popular, instead.
An inline controller can be used to turn a non-programmable (full-size) keyboard into a programmable one. E.g. https://www.1upkeyboards.com/shop/controllers/usb-to-usb-converter/ this one runs tmk, which is what QMK was originally forked from.
I like how, with qmk, it is possible to reduce latency. An inline controller will only add latency.
Sure it’s definitely a work around. Figured it was one way to get access to more programmable full size keyboards. Another option is to replace the controller in an existing board. I did this with my Filco TKL — it now runs TMK: http://bathroomepiphanies.com/controllers/
But does it in reality? Dan Luu measured the Planck, which is presumably running QMK, at 40ms latency, compared to the 30ms that e.g. my Pok3r has with its proprietary firmware, or the 15ms that Apple’s keyboard seems to have. Overall a pretty weak showing for QMK, which is presumably due to its myriad of awesome features that were not written with latency in mind.
I’m not sure about Dan Luu’s tested keyboard actually running qmk, or how to tweak qmk for low latency. It might have some silly slow debounce enabled by default. Or it might be qmk is hopelessly sluggish. But, at least, with source code and a friendly keyboard, it is possible to work on improving latency. If a keyboard won’t even run qmk, that’s not anywhere as good as a starting point.
None of my keyboards run it, but I plan to eventually (once it’s easier?) get a supported full size keyboard. Then I might personally take a look at it. Worst case scenario, qmk turns out to be shit but I do learn how it works, then write my own.
If it isn’t mechanical, it won’t appeal to the sort of people who care about what keyboard they use.
Reprogrammable, high-quality, low-volume mechanical keyboards cost more to produce than conventional keyboards do. When cost is a consideration in this way, you’re more likely to think critically about each piece of the design and question whether the value you get from it justifies the additional cost, rather than saying “let’s just do an exact copy of the 101-key design that IBM standardized in the 1980s”.
Given this dynamic, it’s very unsurprising to me that no one builds “full size” designs this way.
There’s plenty of full size mechanical keyboards in the market. They also sell well, to gamers.
Picking a qmk friendly microcontroller and preparing a friendly firmware update method can’t be that hard, much less increase the cost in any significant way.
It just hasn’t been done.
I disagree. I use the Microsoft Ergonomic keyboard as my preferred daily driver but would enjoy having QMK onboard. I’ve used a mechanical keyboard before but buying/building a shaped, tilted, split design mechanical is really expensive. The Ergodox EZ looks great, but it’s a month’s rent, ~5-6x the Microsoft keyboard.
kbdfans has one, but it’s sold out: https://kbdfans.com/collections/fully-assembled-keyboard/products/fully-assembled-kbd19x-keyboard
This is obviously bad news for the elderly and the people who are not very tech literate.
While visiting my elderly parents recently, I noticed how good they were at spotting spam texts, but they were relying on poor grammar and spelling to spot them.
LLMs take that whole detection vector away, and leave elderly people a lot more vulnerable to scams.
It’s a lot harder to teach people who are less comfortable with technology about typo-squatting, SSL certificates and whatnot, and a lot of the “bad feelings” we experience when being contacted in ways we shouldn’t expect from e.g. banks or Amazon rely on our own tech literacy.