1. 21
  1.  

  2. 3

    Hi, I’ve just published the first release of this program. It is open source, released under the Apache 2.0 license.

    From the linked document:

    Cedro is a C language extension that works as a pre-processor with four features:

    • The backstitch @ operator.
    • Deferred resource release.
    • Block macros.
    • Binary inclusion.

    The source code archive and GitHub link can be found here: https://sentido-labs.com/en/library/

    Edit: on some machines, the miniz library does not compile with -Wsign-conversion: you can get it to work by removing that option from CFLAGS in the Makefile. This affects only cedro-new: both cedro and cedrocc are compiled before that error occurs.

    1. 4

      This looks neat, I have a few comments:

      Why does the deferred resource release not use __attribute__((cleanup))? It can generate code that is correct in the presence of exceptions, whereas the output code here still leaks resources in the presence of forced stack unwinding. Is it just that you’re doing token substitution? (__attribute__((cleanup)) takes a pointer to the object being cleaned up, so you can have a single cedro_cleanup function that takes a pointer to a structure that contains explicit captures and a function to invoke.) The choice of auto for this also means that it cannot be used in headers that need to interoperate with C++ - was that an explicit design choice?

      Similarly on the lowering, binary includes are a much-missed feature in C/C++, but the way that you’ve lowered them is absolutely the worst case for compiler performance. The compiler needs to create a literal constant object for every byte and an array object wrapping them. If you lower them as C string literals with escapes, you will generate code that compiles much faster. For example, the cedro-32x32.png example lowered as "\x89\x50\x4E\x47\x0D\x0A\x1A..." will be faster and use less memory in the C compiler. I’m not sure I understand this comment though:

      The file name is relative to the current directory, not the C file, because usually binary files do not go next to the source code.

      What is ‘the current directory’? The directory from which the tool is invoked? If so, that makes using your tool from any build-system generator annoying because they tend to prefer to expand paths provided to tool invocations to absolute paths to avoid any reliance on the current working directory. I don’t actually agree with the assertion here. On every codebase I’ve worked where I’ve wanted to embed binaries into the final build product, those binaries have been part of the source tree so that they can be versioned along with the source.
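      For illustration (the names here are invented), the two lowerings discussed above can be put side by side, using the PNG signature bytes as the payload. The string form is a single token, so it is much cheaper to parse; note that it also appends a terminating NUL:

      ```c
      #include <string.h>

      /* Byte-array lowering: the compiler must build one integer-constant
         node per byte, plus an array object wrapping them. */
      static const unsigned char png_sig_array[] =
        { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A };

      /* String-literal lowering: the whole blob is one token.  A string
         literal appends a terminating NUL, so this object is one byte
         longer than the array above. */
      static const unsigned char png_sig_string[] =
        "\x89\x50\x4E\x47\x0D\x0A\x1A\x0A";
      ```

      Both objects hold the same eight payload bytes; only the trailing NUL differs.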

      1. 3

        It can generate code that is correct in the presence of exceptions

        It’s a C preprocessor, and C does not have exceptions.

        1. 2

          C has setjmp(3) and friends as a very raw, low-level exception mechanism. It’s basically the same underpinnings, with a much less developer-friendly interface. Still, real C code does make use of it!
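          As a minimal sketch of that mechanism (function names invented here), setjmp marks the recovery point and longjmp transfers control back to it, much like a very raw throw/catch:

          ```c
          #include <setjmp.h>
          #include <stddef.h>

          static jmp_buf on_error;

          /* May "throw": bails out to the recovery point on bad input. */
          static int checked_length(const char *s)
          {
            if (s == NULL)
              longjmp(on_error, 1);        /* the "throw" */
            int n = 0;
            while (s[n] != '\0') n++;
            return n;
          }

          /* The "try/catch": returns -1 if checked_length bailed out. */
          int safe_length(const char *s)
          {
            if (setjmp(on_error) == 0)     /* the "try" */
              return checked_length(s);
            return -1;                     /* the "catch" */
          }
          ```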

          1. 1

            And __attribute__((cleanup(..))) doesn’t work with longjmp. Not even C++ destructors run if you longjmp out of a scope. (Both destructors and __attribute__((cleanup(..))) run when unwinding due to an exception though.)

            C’s longjmp doesn’t do any stack unwinding, it essentially just sets the instruction pointer and stack pointer in a way which breaks lots of stuff. That’s not to say it’s not used though; I’ve encountered it myself with the likes of libjpeg.
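            A small sketch of that behaviour (GCC/Clang-specific; names invented here): the cleanup handler would set a flag at scope exit, but longjmp-ing out of the scope skips it entirely:

            ```c
            #include <setjmp.h>

            static jmp_buf env;
            static int cleanup_ran;

            static void mark(int *p) { (void)p; cleanup_ran = 1; }

            static void leaf(void)
            {
              /* mark() would run when `guard` left scope normally... */
              __attribute__((cleanup(mark))) int guard = 0;
              (void)guard;
              longjmp(env, 1);  /* ...but longjmp skips the scope exit */
            }

            /* Returns the value of cleanup_ran after longjmp-ing out of leaf(). */
            int cleanup_ran_after_longjmp(void)
            {
              cleanup_ran = 0;
              if (setjmp(env) == 0)
                leaf();
              return cleanup_ran;
            }
            ```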

          2. 2

            C does not have exceptions, but C often lives in a world where exceptions can be thrown. If your C code invokes callbacks from non-C code, it often has to handle exceptions being thrown through C stack frames, even if the C code itself doesn’t handle them. Writing exception-safe code is one of the main motivations for __attribute__((cleanup)) existing.

            1. 2

              Is it well defined behavior to throw C++ exceptions across C stack frames? For some reason I thought that was UB.

              1. 2

                It’s certainly not defined behaviour, since the C++ standard doesn’t (naturally enough) concern itself with specifying what happens when an exception passes into code written in another language.

                In practice, functions in C may not have unwind information and an exception that propagates into them will be treated the same as an exception that (would) propagate out of a noexcept-attributed C++ function. (Off the top of my head, I think the result is that std::terminate() is called).

                However, it is possible to compile C with unwind info (e.g. gcc has -fexceptions to enable this), and in that case implementations will allow propagating an exception through the C functions. At some point the exception still needs to be caught, which you can only do in C++ (or at least non-C) code, so this is only needed when you have a call pattern like:

                C++: main()
                         -> foo()   [call into C code]
                C:   foo()
                         -> bar()   [call into C++ code]
                C++: bar()
                         (throws exception)
                
                1. 2

                  Well-defined in what context? In the context of the C specifications, any interaction with code not written in C is not well defined. In C++, functions exposed as extern "C" impose some constraints on the ABI, but no strong definitions. All (well, almost all) functions that are part of the C standard are also part of the C++ standard. This includes things like qsort, and qsort in C++ explicitly permits the callback function to throw exceptions.

                  More relevant, perhaps, are platform ABI standards. On Windows, throwing SEH and C++ exceptions through any language’s stack frames is well-defined behaviour and the C compiler provides explicit support for them via non-standard extensions.

                  Most *NIX platforms use the unwind standard from the Itanium C++ ABI, which defines a language-agnostic Base ABI and layers language-specific things on top. This defines DWARF metadata for each frame that tells the unwinder how to restore the previous frame; you can write it by hand in assembly with the .cfi_ family of directives. If a frame doesn’t contain any code that needs to run during stack unwinding, this information just describes how to restore the stack pointer, what the return address is, and where any callee-save registers that the function used were stashed. The unwinder can then walk the stack, restore the previous frame’s expected register set, and recurse.

                  With GCC and Clang, __attribute__((cleanup)) emits the same metadata in these tables that a C++ destructor would, so the code in the cleanup function is run during stack unwinding. You can use this to do things like release locks and deallocate memory in C if a callback function that you invoke from any language that has exceptions throws an exception through your stack frame. Note that this doesn’t let you catch the exception. There’s no major reason why you shouldn’t be allowed (from C) to block exception propagation, but if you need this then invoking the callback via a tiny C++ shim that does try { callback() } catch (...) {} will prevent any exceptions (even non-C++ ones that use the same base ABI) from propagating into the C code.

            2. 2

              Why does the deferred resource release not use __attribute__((cleanup))?

              Because that is compiler-dependent, at least for now. I want something that works where a newer compiler is not available. Also, the current mechanism in cedro allows code blocks with or without conditionals, which are more flexible than the single-argument function required by __attribute__((cleanup)), unless I misunderstand that feature. I’ve actually never used variable attributes in my own programs, only read about them, so I might be missing something.

              The choice of auto for this also means that it cannot be used in headers that need to interoperate with C++ - was that an explicit design choice?

              It wasn’t, the reason was to avoid adding more keywords that would be either prone to collisions, or cumbersome to type. For use with C++, you could write the output of cedro to an intermediate file which would be standard C. I’ll have to think about it in more detail to see how much of a problem it is in practice.

              the way that you’ve lowered them is absolutely the worst case for compiler performance. The compiler needs to create a literal constant object for every byte and an array object wrapping them. If you lower them as C string literals with escapes, you will generate code that compiles much faster. For example, the cedro-32x32.png example lowered as “\x89\x50\x4E\x47\x0D\x0A\x1A…” will be faster and use less memory in the C compiler.

              I did not realize that; you are right, of course! I know there are limits to the size of string literals, but maybe that does not apply if you split them. I’ll have to check that out.

              EDIT: I’ve just found out that bin2c (which I knew existed but hadn’t used) does work in the way you describe, with strings instead of byte arrays: https://github.com/adobe/bin2c#comparison-to-other-tools It does mention the string literal size limit. I suspect you know, but for others reading this: the C standard defines some minimum sizes that all compilers must support, and one of them is the maximum string literal length. Compilers are free to allow bigger tokens when parsing.

              I’m concerned that it would be a problem, because as I hinted above my use case includes compiling on old platforms with outdated C compilers (sometimes for fun, others because my customers need that) so it is important that cedro does not fail any more than strictly necessary when running on unusual machines.

              Thinking about it, I could use strings when under the length limit, but those would be the cases where the performance difference would be small. I’ll keep things like this for now, but thanks to you I’ll take these aspects into account. EDIT END.
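              For what it’s worth, splitting via adjacent-literal concatenation would look like this (illustrative payload); one caveat is that the C99 minimum of 4095 characters per string literal is specified after concatenation, so splitting as such only works around compiler-specific per-token limits:

              ```c
              /* Emitting a blob in fixed-size chunks: adjacent string literals
                 are concatenated during translation, so the pieces below form
                 a single array (plus an appended terminating NUL). */
              static const unsigned char blob[] =
                "\x89\x50\x4E\x47"   /* chunk 1 */
                "\x0D\x0A\x1A\x0A";  /* chunk 2 */
              ```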

              What is ‘the current directory’? The directory from which the tool is invoked? If so, that makes using your tool from any build-system generator annoying because they tend to prefer to expand paths provided to tool invocations to absolute paths to avoid any reliance on the current working directory. I don’t actually agree with the assertion here. On every codebase I’ve worked where I’ve wanted to embed binaries into the final build product, those binaries have been part of the source tree so that they can be versioned along with the source.

              I see, that’s again something I’ll have to consider more carefully. I keep the binaries separated from the source code, but it would make sense to put things like vertex/fragment shaders next to the C source.

              Thank you very much for your detailed review.

              1. 1

                Also, the current mechanism in cedro allows code blocks with or without conditionals, which are more flexible than the single-argument function required by __attribute__((cleanup)), unless I misunderstand that feature. I’ve actually never used variable attributes in my own programs, only read about them, so I might be missing something.

                The attribute takes a function, the function takes a pointer. That’s sufficient to implement a closure. For example, you could transform:

                int a;
                int b;
                auto a += b;
                

                Into something like this:

                // These can be in a header somewhere
                struct cedro_cleanup_capture
                {
                  void (*destructor)(struct cedro_cleanup_capture*);
                  void *captures[0];
                };
                void cedro_cleanup(void *p)
                {
                  struct cedro_cleanup_capture *c = (struct cedro_cleanup_capture *)p;
                  c->destructor(c);
                }
                
                // Generated at the top-level scope by Cedro
                static void __cedro_destructor_1(struct cedro_cleanup_capture *c)
                {
                  // Expanded from a += b
                  *((int*)c->captures[0]) += *((int*)c->captures[1]);
                }
                
                ...
                
                int a;
                int b;
                // Generated at the site of the `auto` bit by cedro:
                __attribute__((cleanup(cedro_cleanup)))
                struct { void (*destructor)(struct cedro_cleanup_capture *); void *ptrs[2]; } __cedro_capture_1 = { __cedro_destructor_1, { &a, &b } };
                

                Now you’ve got your arbitrary blocks in the cleanups. If your compiler supports the Apple blocks extension then this can be much simpler because the compiler can do this transform already.

                It wasn’t, the reason was to avoid adding more keywords that would be either prone to collisions, or cumbersome to type. For use with C++, you could write the output of cedro to an intermediate file which would be standard C. I’ll have to think about it in more detail to see how much of a problem it is in practice.

                The best way of doing this is to follow the example of Objective-C and use a character that isn’t allowed in the source language to guard your new keywords. A future version of C may reclaim auto in the same way that C++11 did, and some existing C code uses it already, so there’s a compatibility issue here. If you used $auto then it would not conflict with any future keyword or identifier.

              2. 1

                What is ‘the current directory’? The directory from which the tool is invoked? If so, that makes using your tool from any build-system generator annoying because they tend to prefer to expand paths provided to tool invocations to absolute paths to avoid any reliance on the current working directory. I don’t actually agree with the assertion here.

                After thinking about it, my conclusion is that you are right, so I have changed the program: now the binary file is loaded relative to the including C source file.

                1. 1

                  Thanks! What are you currently using this for? The place I would imagine it being most useful is for tiny embedded systems that have a C compiler but no C++ compiler. Firmware blobs that want embedding in the final binary are pretty common there.

                  1. 1

                    Well, today I’m continuing work on source code manipulation tools using tree-sitter, which is a parser generator that outputs C parsers. I started with the Rust wrapper but some of the machines where I would like to run it do not have a Rust compiler, some because of the OS, others because of the CPU ISA.

                    What I’m doing is exploring how much I can cut out dependencies and remain productive. Dependency hell is manageable for a full-time job, but for anything else that I revisit only occasionally it is not acceptable to get derailed because something that used to work was changed and no longer does, and you cannot get back to a previous version because of a tangle of up-/downstream dependencies.

                    The use case of resource-limited machines like microcontrollers and retrocomputers is also a goal: I hope to resume work in that respect soon; like many people, I have a bunch of such machines lying around waiting for me to find some time for them. The intention is that a simpler build chain should make that easier to do as a spare-time job.

                    And then, binary includes are very useful to cut down on dependencies even on modern machines: one example is simple GUI applications, where, using nanovg and embedding the fonts and images, I can get an executable that does not require installation and depends only on glibc and the various libGL* and libglfw libraries, which works well in practice for me. I find this much easier to keep portable than using big, complex GUI frameworks, which I admit provide lots of difficult-to-implement features: for some programs, though, I find that my choice is not between minimal dependencies with spartan features and more dependencies with complete features, but between minimal dependencies and a non-compiling/non-running program.

                    1. 1

                      For pretty much anything I was using C for 10 years ago (including a Cortex-M0 with 8 KiB of RAM) I’m using C++. C++17 is available on any system with GCC or Clang support. The C++ standard library is sufficiently ubiquitous that it counts as a dependency in the same way that the C standard library does: it’s there on anything except tiny embedded things. It can easily consume C APIs and with modern C++ there are a lot of useful things for memory management and so on.

                      1. 1

                        I do see your point, and I’ve used C++ for decades and expect to keep using it in the future: the improvements in the last years after the stagnation period have made it much more comfortable to use.

              3. 1

                This is really great! I love it!

                1. 3

                  Thanks, I would like to hear about your experience, positive or negative, once you get to try it out.

              4. 1

                The block macro and binary file inclusion would save so many headaches for many people. This looks great! Good work!

                1. 1

                  Neat! Objective-C also got its start as a preprocessor (also using the @ token). And so did C++, come to think of it.

                  Why did you decide to use the existing “auto” for cleanup, instead of a new word like the de-facto standard “defer”? Just to avoid adding any reserved words to the language?

                  1. 1

                    :-) My purpose is explicitly not to build this up into a new language, because if you can use a new compiler there are better options out there already.

                    As for using auto, you are exactly right: it was to avoid adding keywords.

                    1. 2

                      Why use keywords at all? For a preprocessor, wouldn’t it make more sense to add a #defer?

                      1. 2

                        There is the issue that existing editor modes are likely to mis-indent such lines. It would be a deal breaker for me to require custom editor settings.