1. 48
  1. 12

    I’ve wrapped quite a few C APIs in Rust, and for decently written C libraries it works out quite well actually (and bad libraries are bad in any language). It’s a bit laborious, because it can’t be completely automated, but I like the results.

    Initialization and freeing of a library’s handles map nicely to Foo::new() and automatic Drop. C documentation of ownership (“this pointer is temporary” vs. “you must free this” and “don’t call anything after destroy(handle)”) can be made enforceable by the borrow checker. Similarly, Rust can encode which functions are thread-safe and which aren’t, and guard that. So even C code can be protected from misuse, and therefore be safer to use, without Rewriting It In Rust™.
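
    A minimal sketch of that pattern, assuming a hypothetical C library (foo_create / foo_len / foo_destroy are invented names; the stand-ins below are plain Rust so the example is self-contained, where a real binding would declare them as extern "C" functions, e.g. via bindgen):

    ```rust
    // Stand-ins for a hypothetical C library; a real binding would declare
    // these as `extern "C"` functions instead of implementing them in Rust.
    struct RawFoo { len: usize }
    unsafe fn foo_create(len: usize) -> *mut RawFoo {
        Box::into_raw(Box::new(RawFoo { len }))
    }
    unsafe fn foo_len(p: *const RawFoo) -> usize {
        unsafe { (*p).len }
    }
    unsafe fn foo_destroy(p: *mut RawFoo) {
        unsafe { drop(Box::from_raw(p)) }
    }

    /// Safe owner of the C handle: `new` wraps foo_create, and Drop calls
    /// foo_destroy exactly once, so "don't call anything after
    /// destroy(handle)" becomes a compile-time guarantee: once the Foo is
    /// dropped, no method can be called on it.
    pub struct Foo { raw: *mut RawFoo }

    impl Foo {
        pub fn new(len: usize) -> Foo {
            Foo { raw: unsafe { foo_create(len) } }
        }
        pub fn len(&self) -> usize {
            unsafe { foo_len(self.raw) }
        }
    }

    impl Drop for Foo {
        fn drop(&mut self) {
            unsafe { foo_destroy(self.raw) }
        }
    }
    // Because Foo holds a raw pointer, it is automatically !Send and !Sync,
    // which encodes "this library is not thread-safe"; a thread-safe library
    // would opt in explicitly with `unsafe impl Send for Foo {}`.
    ```

    The caller never sees a raw pointer or a destroy function; double-free and use-after-free are ruled out by construction.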

    1. 11

      I’m hitting this hard right now with GUI programming. The only good cross-platform options still seem to be “Use C++ and Qt” or “Use C and GTK+”.

      Edit: Suggestions welcome…

      1. 7

        Why not use a binding that does the grunt work of interfacing with C or C++? There are two good Qt bindings for Python (PyQt and PySide). There are good bindings for Gtk+ for many languages. (Though Gtk+ is pretty horrible outside X11/Wayland.)

        1. 6

          The Red language has a native cross-platform GUI system (with Win32, Cocoa, and GTK backends) and a declarative DSL on top of it.

        2. 8

          I enjoyed this writeup, and have felt this pain.

          One of the best examples of this for me is interop between C or C++ and other languages, due to the way scoping and headers/templates work.

          Interfacing with C is tedious and detail-oriented, but the semantics around memory (mis)use are obvious, and the data structures are just piles of bytes, since the metaprogramming facilities are so limited. This, incidentally, is a great thing about doing Lua/C interop.

          A language like C++ where there can be so much magic going on due to templates and scoping and other things can make doing bindings in another language downright nightmarish.

          1. 2

            I absolutely love interop between Lua and C, it’s no surprise to me that it’s so popular for embedding.

          2. 6

            This post could not have come at a better time. Recently I created an application using a mix of languages, each 100% geared toward solving its particular problem.

            • POSIX shell (dash) for process and IO management.
            • SQL (sqlite3) for all database communication.
            • and “cpc” (custom program code) (Node.js + Firefox) for everything else.

            I’ve spent maybe a total of 4 hours on actual coding; the rest went into thinking in this new paradigm. Everything just fits together beautifully. No worrying about APIs, or bridges, or bindings. Just output what other programs expect as input. Use the programs that exist. If they don’t: write them in the language that makes the most sense.

            In a sense, shell code is the ether of the universe. Everything else is particles.

            Take this quote:

            The only good cross platform options still seem to be “Use C++ and QT” or “Use C and GTK+”.

            Here the goal is “a GUI to do X”, so you’d pick any combination of tech that makes sense. But say you want to save information from within the application to an SQL database: you’d have it pipe SQL statements out into sqlite3 or a database pipe. It’s so beautiful.

            I’m not sure if bash or whatever shell can do cyclic or selective piping, but that would be even crazier. Until then, spawning a curl process from within your C/GTK application would be sufficient to do HTTP requests, for example. No bindings.
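
            That “spawn a process instead of a binding” move is easy from most languages, not just shell. A sketch in Rust (with `echo` standing in for `curl`, so the example runs without network access):

            ```rust
            use std::process::Command;

            // Run an external program and capture its stdout: the "no
            // bindings" approach. Instead of linking an HTTP client library,
            // you could invoke `curl <url>` here and read the response body
            // from the child's stdout.
            fn run_and_capture(cmd: &str, args: &[&str]) -> std::io::Result<String> {
                let out = Command::new(cmd).args(args).output()?;
                Ok(String::from_utf8_lossy(&out.stdout).into_owned())
            }
            ```

            The trade-off versus a binding: you pay process-spawn overhead and parse text instead of calling typed functions, but there is no FFI surface at all to get wrong.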

            What I’m trying to say, I think, is: processes instead of all-in-one programs really are the way to go. They are more flexible and robust.

            I want to add that using shell really forces you to define the major paths in your application. It’s nice.

            1. 3

              There is something ironic about using SQLite, a library explicitly designed for embedded/in-process usage, by piping SQL statements into a separate SQLite process. :-) Does that work well for large INSERTs, then? And how do you get the data back out of the database and into your main process, do you have to parse textual output or does SQLite support other forms of output that are faster/more reliable to parse? I am fascinated, and will read anything you feel like writing about your ideas, choices, and experiences with this program.

              1. 3

                Like so:

                sqldb list_trees.sql.sh "$@" | myprogram.sh parse-dna | sqldb insert_dna.sql.sh "$@"

                Where sqldb is a very, VERY simple shell function. I’m talking 5 lines.

                You would then invoke this like so: myprogram.sh update-dna .

                So. Damn. Simple. And I’m only discovering more simple ways of doing things, and more benefits. Like being able to swap out the underlying database by modifying sqldb, or making it use a PostgreSQL Unix socket instead to be way more performant. :) I’m only using sqlite3 because it’s self-contained, but I understand it’s supposed to be “embedded” too.

                does SQLite support other forms of output

                Yes! It supports csv!

                1. 1

                  I’m only using sqlite3 because it’s self-contained, but I understand it’s supposed to be “embedded” too.

                  I, too, use SQLite a lot as a stand-alone program – I’ll bet SQLite’s self-containedness has been as important to its success as its embeddability. Still ironic ^_^

                  Yes! It supports csv!

                  Wonderful, that’s good enough for nearly everything one might want to do. I do prefer other, typed, data formats when they’re available (reading CSV and having to turn strings/ints back into dates has gotten old for me), but that’s neither here nor there, streaming CSV from process to process is good stuff.

                  I’ve written a similarly structured project; in my case the separate programs didn’t stream to each other, but wrote/read intermediate files, because that allowed me to use a Makefile for one-month-at-a-time processing. I mostly liked the experience, but I have to say passing data between processes doesn’t feel very different from passing data between pure functions. The separate processes forced me to be more explicit about the intermediate artefacts, but handling state as intermediate files added a lot of ‘check whether file exists’ logic to the code.

            2. 5

              C# has tools to ease this pain, but the C API needs to be written with wrappability in mind.

              Once I needed to do some simple image processing, and everybody suggested I use ImageMagick/GraphicsMagick as a de-facto standard. I tried it (despite my bad opinion of it, based on its security track record, for example), but its API was totally … un-idiomatic from any point of view, to put it mildly.

              On the other hand, I found libGD, which was simple to wrap, and its workflows were easy to use in either language. It wasn’t super-idiomatic C# code when wrapped, but it was fine and easy to work with.

              1. 1

                Were you trying to wrap ImageMagick’s C API in C#? Or were you using Magick.NET? From a quick glance at Magick.NET, it looks like a fairly idiomatic C# API to me (it’s all objects and methods, and some of the example code uses C# constructs with no straightforward C equivalent, like with), but I’m not a C# developer.

                1. 3

                  I checked Magick.NET and I was not impressed by it. It needed a global ImageMagick install on the machine the code would run on, if I remember well, and that was a no-go in my case. It was simpler to compile GD and ship the .dll/.so, and write the few lines of P/Invoke shim I needed.

                  Also ImageMagick had some security issues in those times, and I decided I’d rather avoid it. But this was quite a few years ago.

                  I can also remember that some global static state is used by ImageMagick, and I had to call some initializer method, which smells like bad software engineering for an image manipulation library, in my opinion…

                  P/Invoke makes crossing this boundary relatively painless, as C# was designed around the idea (AFAIK it was created because Sun didn’t like Microsoft’s extensions to Java, which made COM interop simpler).

                  Nowadays https://github.com/SixLabors/ImageSharp/ is the best way to go if you’re using .NET. It wasn’t a thing back then.

              2. 3

                I don’t really like this boundary, and I think most programmers who have worked at it would agree. If you like C, you’re stuck either writing bad C code or using poorly suited tools to interface badly with an otherwise good API. If you like $X, you’re stuck writing very non-idiomatic $X code to interface with a foreign system.

                I don’t like the boundary either, but I guess the boundary is going to be the lowest common denominator between the languages that have to interface. And perhaps we should be happy that UNIX uses a C-based ABI (though it could be safer). Imagine some object system at the lower level and having to communicate with that from a non-OO language. I don’t know much about Windows, but I guess COM was/is an attempt to define a different kind of interface (based on objects)?

                1. 9

                  I don’t know much about Windows, but I guess COM was/is an attempt to define a different kind of interface (based on objects)

                  Not merely an attempt! COM was (is) a remarkable and runaway success, all things considered.

                  1. 2

                    Can you elaborate a bit more? I programmed a bit of COM, so I know some basics, but I never really thought much of or about it, so I’m quite curious about your more developed opinions, e.g. what you find remarkable and interesting/valuable in it, what it did right, and also what you think it could have done better, and how.

                    1. 3

                      COM is a big part of how the non-C/C++ world interacts with a lot of Microsoft functionality. It’s how VBA, VB6, and .NET languages interact with MS Office, and probably with a lot of other Windows management APIs.

                      That being said, I don’t know where to find good docs on it these days, which would be handy given that it saw a decent amount of use on a work project. Ironically, we’ve been porting the bits of the application that used COM to interop with a different product in our company to just do the same things in C#, since that’s easier than figuring out what has gone wrong as the bits and pieces of that wrapper code have suffered from the other product shifting underneath them.

                      1. 2

                        I may not have stated this clearly: I’m not really interested in what COM is, as that part I know; I’m interested in, basically, its pros & cons over C-like ABI protocols (ignoring small details like cdecl vs stdcall), in the eyes of people who have worked with them enough to have opinions on that.

                        1. 5

                          It’s the layer on top of the basic function-call ABI that makes it more powerful: functions grouped into interfaces on reference-counted objects, then standard interfaces for things like reflection and remoting. It’s standardized interop at the level of objects, properties, and methods rather than individual functions. So VB.NET can map any other system’s object model into its own object model and make it look “natural”. (Obviously with limitations, just as the function-call ABI has limitations.)
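
                          For a flavor of what “functions grouped into interfaces on reference-counted objects” means at the ABI level, here is a toy sketch in Rust (not real COM: no GUIDs, no QueryInterface, and the method names are invented):

                          ```rust
                          // A COM-ish object: the first field points to a table of function
                          // pointers (the "interface"), followed by per-object state.
                          // Any language that can make C calls can drive this layout.
                          #[repr(C)]
                          pub struct Vtbl {
                              pub add_ref: unsafe extern "C" fn(*mut Obj) -> u32,
                              pub release: unsafe extern "C" fn(*mut Obj) -> u32,
                              pub get_value: unsafe extern "C" fn(*const Obj) -> i32,
                          }

                          #[repr(C)]
                          pub struct Obj {
                              pub vtbl: *const Vtbl,
                              refcount: u32,
                              value: i32,
                          }

                          unsafe extern "C" fn add_ref(o: *mut Obj) -> u32 {
                              unsafe { (*o).refcount += 1; (*o).refcount }
                          }

                          unsafe extern "C" fn release(o: *mut Obj) -> u32 {
                              unsafe {
                                  (*o).refcount -= 1;
                                  let n = (*o).refcount;
                                  if n == 0 {
                                      // Last reference gone: the object frees itself.
                                      drop(Box::from_raw(o));
                                  }
                                  n
                              }
                          }

                          unsafe extern "C" fn get_value(o: *const Obj) -> i32 {
                              unsafe { (*o).value }
                          }

                          static VTBL: Vtbl = Vtbl { add_ref, release, get_value };

                          /// "Factory": returns an owned reference (refcount 1); callers
                          /// share it with add_ref and give it up with release.
                          pub fn create(value: i32) -> *mut Obj {
                              Box::into_raw(Box::new(Obj { vtbl: &VTBL, refcount: 1, value }))
                          }
                          ```

                          The point of the extra layer: a caller only needs the Vtbl layout and a pointer, never the implementation, which is why object models can cross language boundaries this way.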

                          1. 1

                            Ah, I see. I don’t have enough experience with either to have a very informed opinion, other than to say that I don’t know where to find good COM documentation, whereas the C-like ABI protocols are usually documented decently well, if a bit tedious to work through.