1. 8

    I honestly don’t expect to choose C++ over C for any project again. C99 has proven itself to be much more useful for my general projects.

    I guess these projects don’t deal with strings a lot…

    The defer keyword. […] That’s the premise behind smart pointers, RAII, and similar features. Zig’s method blows all of those out of the water.

    How does manual scope-based destruction “blow out of the water” the automatic one (RAII)?!?

    In Rust, C++ and D (when you use std.typecons.scoped), you literally cannot forget the defer free(something), because it’s implicit.

    1. 4

      i think there’s merit in having defer instead of RAII, but it only covers a third of RAII (the part where you don’t have to drop at every exit point)

      if you had a second feature, where the compiler made sure every variable was moved before its scope ended, that’d get you the second part of RAII. the memory safety part. the most important part.

      the third part is basically generics/traits. there’s a common “drop” interface. i think it’s reasonable to skip that part if you’re going for something small and c-like (and there’s some added flexibility with pool allocators and such, if you want a drop that takes params)

      1. 3

        I personally dislike having to define a class for everything that needs to be cleaned up. To be honest, I wonder if my ideal would perhaps be a hybrid of Zig and Rust, where it’s manual, but automatically checked.

        1. 1

          You can already do this in Rust, by using the #[must_use] attribute, and defining a free(self) method (or whatever) instead of implementing Drop. You do have to define a struct, though. But that’s necessary, anyways, in order to not expose memory unsafety to the caller.
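
          A minimal sketch of that pattern. `TempFile` and its methods are made-up names for illustration, not a real API:

          use std::fs::{self, File};
          use std::io::Write;

          // Hypothetical resource type; the #[must_use] warning fires if a TempFile
          // value is created and immediately discarded.
          #[must_use]
          struct TempFile {
              path: String,
              file: File,
          }

          impl TempFile {
              fn create(path: &str) -> std::io::Result<TempFile> {
                  Ok(TempFile { path: path.to_string(), file: File::create(path)? })
              }

              // Cleanup consumes `self`, so the handle cannot be used (or freed) twice.
              fn free(self) -> std::io::Result<()> {
                  drop(self.file);            // close the handle first
                  fs::remove_file(&self.path) // then delete the file
              }
          }

          fn main() -> std::io::Result<()> {
              let mut tmp = TempFile::create("scratch.txt")?;
              tmp.file.write_all(b"hello")?;
              // Cleanup is explicit, and double-free is ruled out by move semantics,
              // but nothing forces it to happen before scope end (unlike Drop/RAII).
              tmp.free()
          }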

          1. 4

            Or you can have a struct that holds a lambda and you literally have defer :) So defer is, in a way, a special case of RAII.

            e.g. https://docs.rs/scopeguard/
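
            Hand-rolled, the trick looks roughly like this (the general idea behind scopeguard, not its exact API):

            // A guard that holds a closure and runs it when dropped. This is the
            // general idea behind scopeguard-style crates, not the crate's exact API.
            struct Defer<F: FnOnce()> {
                f: Option<F>, // Option so the closure can be moved out in drop()
            }

            impl<F: FnOnce()> Drop for Defer<F> {
                fn drop(&mut self) {
                    if let Some(f) = self.f.take() {
                        f();
                    }
                }
            }

            fn defer<F: FnOnce()>(f: F) -> Defer<F> {
                Defer { f: Some(f) }
            }

            fn main() {
                // Bind to a named variable: `let _ = ...` would drop (and run) it immediately.
                let _cleanup = defer(|| println!("runs at scope exit, even on early return or panic"));
                println!("doing work");
            } // _cleanup dropped here; the closure runs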

      1. 4

        rolling the negative operator into the numeric literal syntax would make me feel a lot better about removing it. it’s still annoying that it collides with binary subtraction, but it’d reduce the scope of that annoyance

        for logical not, i’d also want some other language features to make up for it, like unless and until

        1. 5

          it’s still annoying that it collides with binary subtraction

          Just require spaces between the operator and the operand. I don’t understand why people don’t do that in the first place.

          1. 2

            rolling the negative operator into the numeric literal syntax would make me feel a lot better about removing it

            Ooops. Yes. I forgot to write that down. That was implied. :-(

            EDIT: Updated article.

          1. 2

            seems like a good idea for posts with a low number of votes/views, where the signal that it’s unwanted is much less clear

            more total votes/views should shorten/remove the grace period, though. if a bunch of people see it, you can be much more confident in its rating

            1. 1

              more total votes/views should shorten/remove the grace period, though. if a bunch of people see it, you can be much more confident in its rating

              yeah, the idea isn’t really fleshed out, but this would be something one could do.

            1. 18
              (๑•ᴗ•)⊃━~━☆゚cd ∆
              (๑•ᴗ•)⊃━~/∆━☆゚
              
              1. 2

                isn’t JAMStack literally every website? is that the joke? does the j stand for vanilla.js?

                js on the client, because that’s all it can run*. APIs on the backend, because that’s what we call things on the backend. and markup, because that’s how to tell the browser to render things?

                (*until wasm catches on)

                1. 3

                  My understanding of jamstack is that it is relatively focused. The key attributes are:

                  • Static pages (so no server required, other than a file system server like s3)
                  • JavaScript to add some functionality, but not to drive the whole site (so not an SPA)
                  • APIs on the back end (so again, no rendering by a server, whether of partial HTML or something not an API like turbo links)

                  This site does a good job of outlining the specifics:

                  https://jamstack.wtf/

                  1. 1

                    This is part of why I wrote the post: the acronym can cause confusion, but it depends on a static site generator (e.g. Jekyll, Hugo, Gatsby) that uses the markup to build and deploy a site.

                  1. 1

                    Is there much of a case for VLIW over SIMT/SIMD? (SIMT is the model most/all modern GPUs use, which is basically SIMD, but with conditionals that mask parts of the register/instruction, rather than the entire instruction)

                    My basic thinking is that if you have SIMD, conditional masking, and swizzling, you’re going to be able to express the same things VLIW can in a lot less instruction width. And SIMT is data-dependent, which is going to be more useful than the index-dependent instructions of VLIW.

                    Basically, I don’t see the case for having ~32 different instructions executing in lockstep, rather than 32 copies of one (conditional) instruction. It seems like it’s optimizing for something rare. But maybe my creativity has been damaged by putting problems into a shape that GPUs enjoy
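
                    To make the masking idea concrete, here’s a toy scalar sketch (arbitrary lane count, made-up operations; real hardware does this per warp):

                    // Toy model of SIMT-style predication: every lane executes the same
                    // instruction, and a mask decides which lanes keep which result.
                    // The lane count and the two "branches" are arbitrary for this sketch.
                    const LANES: usize = 8;

                    fn main() {
                        let x: [f32; LANES] = [1.0, -2.0, 3.0, -4.0, 5.0, -6.0, 7.0, -8.0];

                        // "if x < 0" becomes a per-lane mask rather than a branch.
                        let mask: [bool; LANES] = core::array::from_fn(|i| x[i] < 0.0);

                        // Both sides of the conditional are computed for every lane...
                        let then_side: [f32; LANES] = core::array::from_fn(|i| -x[i]);      // negate
                        let else_side: [f32; LANES] = core::array::from_fn(|i| x[i] * 2.0); // double

                        // ...and the mask selects, per lane, which result survives.
                        let result: [f32; LANES] =
                            core::array::from_fn(|i| if mask[i] { then_side[i] } else { else_side[i] });

                        println!("{:?}", result);
                    }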

                    1. 2

                      It is more a question of VLIW versus superscalar out-of-order architectures (not SIMT versus VLIW), and there the latter clearly win. On a fundamental level, they are faster because they have more information at runtime than the compiler has at compile time.

                    1. 8

                      tangentially related, here’s a cool tarot deck based on shaders: https://patriciogonzalezvivo.github.io/PixelSpiritDeck/

                      1. 2

                        I own that! It’s wonderful.

                      1. 6

                        This is one of the warts in rust that contributed to me writing way less rust code

                        It was very frustrating to have the compiler reject programs that were obviously correct, and then have to model how the compiler was approximating borrow-correctness, and come up with an alternative solution that satisfied it

                        Makes me excited to get back into rust at some point in the future :)

                        1. 6

                          It has elaborate sytnax[sic]. Rules that are supposed to promote correctness, but merely create opportunity for error.

                          It would help if you could give an example. Are you talking about the MISRA-C rules? Or random rules? What? What are the rules concerning? etc. This information is so general that it can’t be countered or even grokked properly.

                          It has considerable redundancy.

                          Again, what do you mean by ‘redundancy’? Are you talking about function reuse? Reuse of if/for/while constructs? What?

                          It’s strongly typed, with a bewildering variety of types to keep straight. More errors.

                          I don’t understand this. C has three or four groups of types at most: void*/ptrdiff_t, integer, float, char*. You can convert more-or-less freely within these groups. You should take care while converting from one group to the other (For example, if converting from float to int, use lrint and friends and check the result with fetestexcept, etc.). This depends on knowing what you want out of the type and knowing what the type needs from you. As a rule of thumb, use size_t for iteration and indexing; if you need to return a size or an error, use ssize_t. For working with characters, your unicode library should give you a type for dealing with them and ways of converting safely between char* and whatever that type is.

                          As an infix language, it encourages nested parentheses. Sometimes to a ludicrous extent. They must be counted and balanced.

                          So does every other infix language, and so does Lisp, which isn’t infix. I feel, though, that this comes down to knowing your language. Haskell, which has clever methods (like $) of forgoing parentheses, is much more difficult to follow for someone who isn’t really very familiar with it. But I’m not going to complain about Haskell having that, because it’s a feature of the language that (if I were writing Haskell code) I must learn to work with effectively. Likewise, if you use C, you need to know, even if it is very rough knowledge, operator precedence.

                          It’s never clear how efficiently source will be translated into machine language. Constructs are often chosen because the programmer knows they’re efficient. Subroutine calls are expensive.

                          Modern Intel architectures make this pretty easy: avoid branches and keep the cache in mind. See http://nothings.org/computer/lexing.html for an example of complex computing without branches.
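
                          As a tiny illustration of that branch-free, table-driven style (sketched here in Rust, with made-up character categories):

                          // Branch-free character classification via a 256-entry lookup table, in the
                          // spirit of the table-driven lexer linked above. Category values are made up.
                          const WHITESPACE: u8 = 0;
                          const DIGIT: u8 = 1;
                          const LETTER: u8 = 2;
                          const OTHER: u8 = 3;

                          // Build the table once; classifying a character is then a single indexed load.
                          fn build_table() -> [u8; 256] {
                              let mut t = [OTHER; 256];
                              for c in b'0'..=b'9' { t[c as usize] = DIGIT; }
                              for c in b'a'..=b'z' { t[c as usize] = LETTER; }
                              for c in b'A'..=b'Z' { t[c as usize] = LETTER; }
                              for &c in b" \t\r\n" { t[c as usize] = WHITESPACE; }
                              t
                          }

                          fn main() {
                              let table = build_table();
                              let input = b"let x1 = 42;";
                              // No per-character comparison chain in the hot loop, just a table lookup.
                              let classes: Vec<u8> = input.iter().map(|&c| table[c as usize]).collect();
                              println!("{:?}", classes);
                          }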

                          Because of the elaborate compiler, object libraries must be maintained, distributed and linked. The only documentation usually addresses this (apparantly[sic] difficult) procedure.

                          Ehh? For a start, most other languages that are contemporary with C do this. Most other languages that maintain compatibility with C do this. Personally it feels more complex to have to bundle an entire runtime system with your library’s object files (See: Ada) than just distributing the libraries. But ok.

                          Code is scattered in a vast heirarchy[sic] of files. You can’t find a definition unless you already know where it is.

                          Both cscope and grep exist. Use them.

                          Code is indented to indicate nesting. As code is edited and processed, this cue is often lost or incorrect.

                          Do Rust/Ada/Lisp/Pascal/Python not all do this? Wait. Are you comparing C and FORTH? I think a lot of this now makes sense.

                          There’s no documentation. Except for the ubiquitous comments. These interrupt the code, further reducing density, but rarely conveying useful insight.

                          You can use C with Doxygen or whatever. The library’s README or related documentation should cover using it. C lacks a good documentation system, but really, so do a lot of its contemporaries. And at the end of the day, it’s not really about which documentation system exists but about how the programmer uses it. You can write abysmal documentation in a language with an amazing documentation system.

                          Constants, particularly fields within a word, are named. Even if used, the name rarely provides enough information about the function. And requires continual cross-reference to the definition.

                          What’s the alternative here? Of course you need to know what a constant stands for to understand how it’s used. If I throw you the definition of F=MA, unless you’ve taken enough high-school physics to know that ‘F stands for Force, etc etc.’, ‘M stands for Mass which means […]’, ‘A stands for Acceleration which is […]’, then you’re going to be scuppered by the definition. This is a knowledge-transfer problem, a fundamental problem of grokking things, not a defect of any single programming language or dialect.

                          Preoccupation with contingencies. In a sense it’s admirable to consider all possibilities. But the ones that never occur are never even tested. For example, the only need for software reset is to recover from software problems.

                          Are they not? Is this not true for every language? Humans write tests; humans are fallible and might ignore your amazing tool that tells them how much of their code is covered by tests. If you make a tool that forces them to get 100% code coverage, they’ll just write the code to handle fewer eventualities, so there is less code to test, which leads to shoddier code! It’s the same quandary with documentation. That’s not even going into the fact that testing all of your code is a fallacy anyway (Although I agree you should aim for 100% coverage, ideally).

                          Names are short with a full semantic load.

                          It’s kind of funny that you talk about having to jump everywhere for definitions in C. Forth makes that worse, because instead of being able to abstract things away, you have to essentially understand the entire codebase. Everything deals with the stack, and each forth word does not signal how much of the stack it deals with. Thus, to understand one definition you have to understand how all definitions beneath it use the stack, and this goes on until you are at the primitives that forth has given you. Forth seems actively hostile to abstraction. There are two facts of life: A) Any non-trivial program will have a large number of words. B) Any given programmer can come up with a definition of a word that does not match the one in your mental model.

                          C has syntax to deal with that. It has comments, interfaces, types, and named parameters. I agree that maybe there are much better tools for the job, but here is where Forth does worse than C (No types to tell you that “carry” is an integer and not a double. No named parameters, so there is no “carry”, and you don’t necessarily have the same definition of “add” that the original programmer did), and it attracts many of the same complaints that you listed earlier in the article!

                          Another difficulty is the mindset that code must be portable across platforms and compatible with earlier versions of hardware/software. This is nice, but the cost is incredible. Microsoft has based a whole industry on such compatibility.

                          Write to POSIX, and it’s supported everywhere. I’m not sure what you want the alternative to be here. Do you want software to not be compatible with different operating systems? Or different processor architectures? What?

                          1. 9

                            Sorry if you were misled otherwise, but this wasn’t written by me - it was written by Charles “Chuck” H. Moore of Forth fame. A while ago, too.

                            1. 5

                              I think a lot of that article makes more sense if you consider embedded programming, where they don’t have POSIX and such (or even documented opcodes), which I hear FORTH is popular with

                              Console/handheld game development is an interesting case for non-portable code, too. You can write a game for a single platform, and get a better result by not trying to make it compatible with others. Or maybe target two or three, and ignore the infinite other possible machines you could make it compatible with

                              Those are also cases where maintenance is less of an issue. I haven’t written more than a trivial amount of FORTH, but I would not want to maintain it over a long period, because it looks like hell to refactor/rearchitect it after it’s written.

                              It looks like its strong suit is programs that you can write once, and throw out and write again when the hardware or the problem changes

                              I think the author is arguing that all/most programs are that kind. I don’t agree with that, but it’ll be good to have their voice in my head when I go to write some code that’s more general than it needs to be

                              1. 1

                                Forth makes that worse because instead of being able to abstract things away, you have to essentially understand the entire codebase. Everything deals with the stack, and each forth word does not signal how much of the stack it deals with. Thus to understand one definition you have to understand how all definitions beneath that use the stack, this goes on until you are at the primitives that forth has given you. Forth seems actively hostile to abstraction

                                I’ve often felt (without much familiarity with the language, admittedly) that forth’s ergonomics would be greatly improved by having words declare their stack effects as “function signatures” and having them checked. at the least I’d like to see a forth-like language that explored that idea, possibly with types as well. though maybe that goes fundamentally against the code-as-data model?

                                1. 3

                                  The Factor variant of Forth does some static checking.

                                  1. 2

                                    One of the tenets of learning Factor is to take the mantra “Factor is not Forth” to heart.

                                    1. 2

                                      They missed a trick by not calling it FINF instead.

                                      1. 1

                                        No offense meant :) I admire factor from a distance.

                                    2. 2

                                      “Checking” them would be complicated (and pointless). They are, however, declared with stack comments in good code, e.g. : SQUARE ( n -- n ) DUP * ;

                                      1. 1

                                        why would it be pointless? if i could declare

                                        * : num num -> num
                                        dup: 'a -> 'a 'a
                                        square: num -> num
                                        

                                        that could be automatically checked for consistency and type safety without reducing the expressiveness or ease of use of the language.
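
                                        here’s the minimal version of the check i’m imagining, counting only stack depth and ignoring types (all names are illustrative, written in rust just as a sketch):

                                        use std::collections::HashMap;

                                        // Declared stack effect of a word: how many items it pops and pushes.
                                        // This only checks arity; a real checker would track types too.
                                        #[derive(Clone, Copy)]
                                        struct Effect {
                                            pops: usize,
                                            pushes: usize,
                                        }

                                        // Check that a definition's body, run against its declared inputs,
                                        // never underflows and ends with the declared number of outputs.
                                        fn check(decl: Effect, body: &[&str], dict: &HashMap<&str, Effect>) -> Result<(), String> {
                                            let mut depth = decl.pops as isize;
                                            for word in body {
                                                let e = dict.get(word).ok_or_else(|| format!("unknown word {word}"))?;
                                                depth -= e.pops as isize;
                                                if depth < 0 {
                                                    return Err(format!("stack underflow at {word}"));
                                                }
                                                depth += e.pushes as isize;
                                            }
                                            if depth == decl.pushes as isize {
                                                Ok(())
                                            } else {
                                                Err(format!("declared {} outputs, body leaves {}", decl.pushes, depth))
                                            }
                                        }

                                        fn main() {
                                            let mut dict = HashMap::new();
                                            dict.insert("dup", Effect { pops: 1, pushes: 2 });
                                            dict.insert("*", Effect { pops: 2, pushes: 1 });

                                            // : square ( n -- n ) dup * ;
                                            let square = Effect { pops: 1, pushes: 1 };
                                            println!("{:?}", check(square, &["dup", "*"], &dict)); // Ok(())

                                            // a bad definition is caught: : square ( n -- n ) * ; underflows
                                            println!("{:?}", check(square, &["*"], &dict)); // Err(...)
                                        }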

                                        1. 2

                                          You have to add all the code to know when you’re doing a type check; you need code to handle all the different cases being analysed (including varargs); and you need to keep track of words that are already valid and called from a parent word, lest you recompute the safety every time (a cache). I’m not sure how rdrop would work in this kind of type-checked system either, since an rdrop isn’t a return at all - plus words that perform i/o would be special cases…

                                          Worst of all, you have to add types. It just sounds like a lot of complexity for what could instead be gained by writing short, simple-to-follow definitions.

                                  1. 3

                                    After using Unity for ~9 years, I’ve come to a different conclusion on performance: When you’re further from the metal, you don’t get to stop thinking about the machine, but you do need to grab a 3-meter-long screwdriver sometimes

                                    I think this is what Jonathan Blow was talking about when he said you’d have to rewrite many of Unity’s systems if you wanted to make The Witness in it. You still need to understand how the hardware works if you want to have any hope of making your code fast, but you’re going to need to work around/replace parts of the higher-level engine/language, and sometimes that’s more work (or more annoying work; I think Blow has a low tolerance for that kind of code) than doing everything in a lower-level environment

                                    I’m very pro-Unity for most games. You’ll get to a prototype way faster, and the pain of making a few Rube Goldberg machines to get it running fast enough is usually far less than writing a whole engine. If you want to be Id and work on bleeding-edge rendering tech, it’s not going to be a good choice. But for almost anything short of that, the code and time saved are worth the gross hacks