1. 24

    I didn’t submit the link but I did write the blog post, so AMA.

    1. 3

      Thank you for stepping in! Your involvement with the community, your presentations, and just general hard work are worthy of envy.

      Three questions:

      In one of the recent talks, where Andrei and Walter were asked what their ‘top 3 things’ are, Andrei mentioned a full, unambiguous language specification. Where does it stand now, and where do you see it fitting on the overall scale of things?

      Second, the mobile platform. Is there community/sponsorship interest in having D be a first-class language for Android, iOS (and maybe the Librem)?

      Third, in terms of industry/domain focus, are there specific domains/industries you would like to see more interest/sponsorship from?

      Overall, I am glad to see that memory safety (including at compile time) and multi-language interoperability are high on your list/vision. Given D’s maturity, previous investments, current capabilities and market position – those are the right things to focus on.

      1. 4

        Second, the mobile platform. Is there community/sponsorship interest in having D be a first-class language for Android, iOS

        It depends on what exactly you mean by first-class, but there is sponsorship for it. I was working on Android last weekend, and D with the NDK already works; it’s just a little clunky. But the end result will be D working just as well as C++… which isn’t really first class there - only Java and Kotlin have that, and tbh I don’t expect that to change given the Android VM.

        I also toyed with iOS, but am not officially sponsored on that yet. Actually, I think D has better chances there of working just the same as Objective-C and Swift… but Xcode doesn’t appear to be extensible, so even compiling to the same code with the same access to the runtime might not count as first class.

        1. 2

          Xcode is not very customizable, but at least supports external build commands. It’s relatively easy to generate an Xcode project file with all the tweaks needed to make Build/Run “just work”, even for submission to the Mac AppStore. I’ve done that for Rust/Cargo: https://gitlab.com/kornelski/cargo-xcode

          1. 1

            Indeed, I’ll have to look at that, thanks! What I did for the proof of concept on iOS was just manually add the D static library. It worked very well in the simulator - the dmd compiler speaks the Objective-C ABI and runtime, so even defining the child class just worked. But it only does x86 codegen… the ARM compilers don’t speak Objective-C yet. I am sure I can port it over in another weekend or two.
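
            For reference, the kind of declaration this relies on looks roughly like this (a sketch along the lines of the D spec’s Objective-C interop examples; the exact attribute import may vary by compiler version):

            // Sketch only: declare an existing Objective-C class so D code can call into it.
            import core.attribute : selector;

            extern (Objective-C)
            interface NSObject
            {
                static NSObject alloc() @selector("alloc");
                NSObject init() @selector("init");
            }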

          2. 2

            D with the NDK already works; it’s just a little clunky.

            Thank you Adam.

            Yes, I should not have used the ‘first class’ moniker since, clearly, for the Android platform at least, first class can only mean a JVM language.

            In the larger context, what I meant was more about ‘easier’ sharing of business logic among Android, iOS and the backend. This is a challenge that seems to fit well with D’s multi-language interoperability vision. And it is a challenge that is yearning to be solved [1], [2].

            For many small-budget teams, developing for Android + iOS (with a common backend) is quite difficult.

            So I was asking more around this angle, of integrating D into IDE/toolchains of the dominant and upcoming mobile platforms, to provide for this multi-mobile-platform+backend code sharing.

            JS/TypeScript seems to be the usual choice of common runtime for code sharing across mobile (but, in my view, JS has costs in terms of compile-time error detection, inter-language data passing, and memory and battery utilization, plus it is not a prevalent language for backends). There is also .NET Xamarin (which makes different tradeoffs than the common JS approaches).

            [1] https://medium.com/ubique-innovation/sharing-code-between-ios-and-android-using-c-d5f6e361aa98 [2] https://news.ycombinator.com/item?id=20695806

            1. 6

              There’s a number of annoying problems to solve here (like Apple accepting LLVM bitcode instead of native code, which means you have to generate exactly the bitcode that the current Apple toolchain emits, and the bitcode isn’t stable). Still, native languages are a good choice there and having a multitude of native languages would be great.

              1.  

                generate exactly the bitcode

                Whoa, how strict are the checks? (I have never done mobile development before, so I am not a great choice to be doing these projects… it just seemed I was the only one with compiler hacking experience and some free time, so it kinda fell on me by default.)

                I read the website and got the impression that they basically did end-user acceptance tests… so I thought if it ran the same way you’d prolly be OK… do they actually scan the code for such strict patterns too? I wouldn’t put it past Apple - I hate them - but it seems a bit crazy.

                1.  

                  They will actually compile the code and ship it to your clients. https://www.infoq.com/articles/ios-9-bitcode/

                  So, when I’m saying “exactly”, I mean it must be legal bitcode for the compiler toolchain you are using. This is a major nuisance, as the LLVM Apple ships with Xcode is some internal branch. So, basically the only option is building a custom compiler against Xcode.

                  For an effort in Rust to do this, check here. https://github.com/getditto/rust-bitcode

                  I’m not well-versed in D; I assume the compiler is not based on LLVM?

                  1.  

                    D has 3 backends: GCC, its own, and LLVM (experimental) [1].

                    I actually think D fits well into this problem domain of sharing business-logic (non-UI) code across the multiple languages and toolchains of the mobile dev world.

                    That’s because the D team invested a big effort into a multi-backend architecture, and into C++ ABI compatibility across the non-standard ABIs of the C++ compilers.

                    [1] https://dlang.org/download.html

                    1.  

                      I wouldn’t call the LLVM one experimental; it is in excellent condition and has been for a long time now. But yeah, the LLVM one is what would surely do the prod builds for iOS… I guess it just needs to be built against the Xcode version and then hopefully it will work.

                      1.  

                        OK, thank you. I had an outdated understanding of D’s LLVM backend. Sorry about that.

                    2.  

                      FWIW, bitcode submission is optional and doesn’t seem to have compelling benefits. I’m a full-time iOS developer and have disabled bitcode in all of my recent projects.

                      1.  

                        Yes, but forcing individual decisions of that kind is not a good habit if you want adoption.

            2.  

              Thanks for the kind words!

              unambiguous language specification. Where does it stand now

              I think we’re inching towards it.

              are there specific domains/industries you would like to see more interest/sponsorship from?

              All of them? :P

            3.  

              I’m watching Zig’s progress, and it seems like it’s more minimalistic and modern (as in: type declarations, expression-based rather than statement-based, quasi sum types with tagged unions) than D, while competing in the “crazy compile time magic” department. Do you have an opinion on whether D could learn from Zig as well?

              1.  

                I don’t have an opinion on that because I know next to nothing about Zig other than they don’t like operator overloading. Which dismisses it as a language I’d like to actually use.

              2. 2

                I don’t understand the first point. Are we finally gonna have a good answer for all of the “but the GC…” protestors? Or are you just saying that the GC isn’t enough to ensure memory safety?

                1.  

                  The GC is enough for memory safety for heap allocated memory, but not for the stack.

                  As for “but the GC…” protestors, it’s a hard issue since it involves changing the current psychological frame that believes the GC is magically slow.
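
                  A contrived sketch of what I mean:

                  int* global;

                  int* fineOnHeap()
                  {
                      auto p = new int;  // GC heap: stays alive for as long as it's reachable
                      *p = 42;
                      return p;          // fine, the GC won't collect it from under you
                  }

                  void notFineOnStack()
                  {
                      int local = 42;
                      global = &local;   // compiles in @system code, but dangles after return;
                                         // this is the hole @safe/DIP1000 is meant to plug
                  }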

                  1.  

                    Random pre-coffee thoughts on this sort of stuff… In my experience it also involves changing the current psychological frame of GC implementors (and users) that believes speed is the problem. I write video games and robotics stuff, both of which are soft-real-time applications. Making them faster can just be a matter of throwing better algorithms or beefier hardware at them, but even a 10ms pause at some random time for some reason I can’t control is not acceptable.

                    I would love to use a GC’ed language for these tasks but what I need is control. So if I’m learning a new language for it I need a more powerful API for talking to the GC than “do major collection” and “do minor collection”, which seems sufficient for most GC writers. (Rust has made me stop paying much attention to GC’ed languages, so more powerful APIs seem to be a bit more common than the last time I checked a few years ago, though.) I also need documentation on how to write critical code that will not call the GC. Actually, now that I look at D’s GC API it looks a lot better than most for this task; you can globally enable/disable the darn thing, and the API docs both describe the algorithm and how it’s triggered. So, writing something like a fast-paced video game in D without erratic framerates due to GC stalls seems like it shouldn’t actually be too hard.
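
                    Something like this, if I’m reading the core.memory docs right (an untested sketch):

                    import core.memory : GC;

                    void main()
                    {
                        GC.disable();   // no automatic collections (unless memory is exhausted)
                        foreach (i; 0 .. 1000)
                        {
                            // ... per-frame work that must not pause ...
                        }
                        GC.enable();
                        GC.collect();   // collect when *I* choose, e.g. on a loading screen
                    }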

                    So, changing the current psychological frame of the “but the GC…” people might be started by demonstrating how you write effective code that uses the GC only where convenient. That way the people who actually need to solve a problem have an easy roadmap, and the people who complain on philosophical grounds look silly when someone with more experience than them says “oh I did what you say is infeasible, it was pretty easy and works well”.

                    I dunno, changing people’s minds is hard.

                    1.  

                      I would love to use a GC’ed language for these tasks but what I need is control.

                      As you wrote later, there are API calls to control when to collect. And there’s always the option to not allocate on the GC heap at all and to use custom allocators.
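
                      For example (a rough sketch; Mallocator lives under std.experimental.allocator):

                      import std.experimental.allocator.mallocator : Mallocator;

                      @nogc void frame()
                      {
                          // @nogc makes the compiler reject any GC allocation in here
                          void[] buf = Mallocator.instance.allocate(4096);
                          scope(exit) Mallocator.instance.deallocate(buf);
                          // ... use buf ...
                      }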

                      I dunno, changing people’s minds is hard.

                      Yep. “GC is slow” is so ingrained I don’t know what to do about it. It’s similar to how a lot of people believe that code written in C is magically faster despite that defying logic, history, or benchmarks.

                2.  

                  One can write the production code in D and have libraries automagically make that code callable from other languages.

                  I think it can be big for Python. Given that Python is used in stats, are there any libraries in D that do stats?

                  1.  

                    I think the mir libraries cover that to a certain degree.

                    1.  

                      Just as Atila mentioned, here are references:

                      https://github.com/libmir

                      https://code.dlang.org/search?q=linear

                  1. 13

                    Some code is unequivocally bad. Maybe not most, but some is. Examples I’ve seen in real life:

                    • A function spanning 30k SLOC
                    • A function with 48 parameters
                    • #define PROJECT_NAME_TWO 2 (I guess cos someone heard magic numbers are bad???)
                    • A hand-coded OOP system in C with two vtable pointers. Because reasons.
                    • A modified forked clang that did type inference in C (yes, C) but only for certain types in the company’s code. Never mind that clang is a C++ compiler that could already do type inference. It also made clang crash, which I’d never seen before and haven’t seen since.

                    Unreasonable constraints unfortunately happen, and someone somewhere was just trying to do their job. But it’s still bad code and there really is no excuse for any of the above. We don’t make excuses for road workers when we go over a pothole.

                    1. 8

                      I think the point of the article is not to deny that the code is bad, but to avoid saying that it’s bad and instead explain what’s wrong with it, and even ask the author for clarification about why they wrote it that way.

                      1. 4

                        Never mind that clang is a C++ compiler that could already do type inference

                        To be fair, they couldn’t just switch Clang to C++ mode and use that to compile their C-plus-auto code. Even though it’s popular to say C++ is a superset of C, it isn’t, and most C code (even most C99 or C89 code) won’t compile in a C++ compiler.

                        So you can kind of see how it makes some kind of twisted sense to patch Clang if you want to write C but also want type inference…

                        1. 2

                          All you wrote is true.

                          I’d have to go into detail about how bad this idea was though. It was one of the dumbest things I’ve ever seen.

                      1. 4

                        Rust can do that too :)

                        $ cat variant.rs &&  rustc --crate-type=staticlib -C opt-level=2 variant.rs && objdump -d libvariant.a | grep \<do_something\> -A 4
                        pub enum ExampleSumType { T0, T1, T2, T3, T4, T5, T6, T7, T8, T9 }
                        #[no_mangle]
                        pub fn do_something(s: ExampleSumType) -> i32 {
                            use ExampleSumType::*;
                            match s {
                            T0 => 3,
                            T1 => 5,
                            T2 => 8,
                            T3 => 13,
                            T4 => 21,
                            T5 => 34,
                            T6 => 55,
                            T7 => 89,
                            T8 => 144,
                            T9 => 233,
                            }
                        }
                        0000000000000000 <do_something>:
                           0:	48 0f be c7          	movsbq %dil,%rax
                           4:	48 8d 0d 00 00 00 00 	lea    0x0(%rip),%rcx        # b <do_something+0xb>
                           b:	8b 04 81             	mov    (%rcx,%rax,4),%eax
                           e:	c3                   	retq
                        
                        1. 1

                          Rust has built-in support for sum types in the language though.

                        1. 3

                          What do system programmers (non-managers) think about TDD? Functional style programming, TDD, and unit testing all go together. Often web and app people are the cheerleaders of all three. The more functional the code, the less complicated it is to mock side-effects when doing TDD. System-programming tends to be more procedural. And often shared data is mutated between hungry processes to reduce memory overhead and latency. How do you TDD (write unit tests first, always) something like that with so many side-effects and corner-cases? The mocking cost must be really high and you still can’t catch everything, no?

                          1. 2

                            I do systems programming and use TDD 99% of the time with no issue.

                            1. 1

                              Does TDD push you to minimize side-effects in as many methods as possible? And are you doing data-intensive stuff with mutexes, locks, and semaphores?

                              1. 2

                                Does TDD push you to minimize side-effects in as many methods as possible

                                I minimise side-effects as a rule anyway. Any technique I’d use would result in hardly any side-effects.

                                with mutexes, locks, and semaphores

                                Why would I want to write code that explicitly locks mutexes? To me that’s like writing assembly on purpose.

                          1. 2

                            From one of the comments below the article:

                            To me this sounds like you’ve completely missed the point of TDD.

                            This is the same back-and-forth navel-gazing that’s seemingly been the focus of the “software craftsmanship” community for at least the past five years.

                            TDD is great and all, but when are we going to get over ourselves?

                            1. 8

                              I encourage you to first start with the mentality “TDD is always viable” and try to pick out flaws in your process that are hindering it.

                              Oh my.

                              1. 1

                                I’m not sure there is a problem here…..

                                There has always since the XP days been the notion of an Architectural Spike.

                                All seat belts off, no tests, coding standards out the door, cowboy to the max…. get the info you need to understand the external system / framework / Tool whatever.

                                Then throw it away and do it right.

                                Then, in this case, you TDD collaboration tests..

                                  i.e. you test that your code and the external system agree on the interface.

                                  i.e. you test that you can handle everything the external system throws at you, and that you can handle every error condition the external system may return.

                                1. 7

                                  When people say “you’re not doing TDD right”, they mean a specific interpretation of TDD: the strict minimalist red-green-refactor as a form of design. “Test everything” or even “write tests before your code” don’t “count” as “real TDD”. So saying “TDD is always viable” is saying “this specific set of rigid rules is always a good choice”, which is… bad.

                                  1. 1

                                    In the category of “Throwing a Mental Grenade into the Conversation and running away giggling…”

                                    Kent Beck’s latest adventure is… test && commit || revert

                                    https://increment.com/testing/testing-the-boundaries-of-collaboration/

                                    Which actually might be a fun experiment to try.

                              2. 5

                                Every time I see a criticism of TDD the response is “you don’t understand TDD” or “you’re doing TDD wrong”, to the point where I’m wondering what the hell TDD even is.

                                1. 8

                                  Dismissing criticism with a hand-wavy “you are doing it wrong” seems to be popular in the Scrum/Agile circles too. It reminds me a lot of https://yourlogicalfallacyis.com/no-true-scotsman

                                  1. 2

                                    I’ve done a lot of TDD, so I’d say it’s:

                                    1. Write a test that fails (either for a feature that doesn’t exist yet or that reproduces a bug)
                                    2. Write minimal code that makes the test pass, no matter how ugly or WET
                                    3. Refactor (this is where design happens)

                                    Other than that the only other rule is to not write production code without a test to motivate/cover it.

                                    In my experience people dismiss seeing the test fail, but it’s extremely important.
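
                                      A toy sketch of steps 1 and 2 (in D only because it has unittest blocks built in; any xUnit-style setup works the same way):

                                      // Step 1: write a failing test first.
                                      unittest
                                      {
                                          assert(isLeapYear(2000));
                                          assert(!isLeapYear(1900));
                                      }

                                      // Step 2: the minimal code that makes it pass; step 3 is cleaning it up.
                                      bool isLeapYear(int year)
                                      {
                                          return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
                                      }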

                                    1. 3

                                      In my experience people dismiss seeing the test fail, but it’s extremely important.

                                      This to me is one of the two truly valuable things you gain from TDD: if you always make the test fail first, you never fall into that nasty trap of having both buggy code and a buggy test, which makes debugging the thing at midnight when the page comes in so much worse. It’s so tempting and easy to see a bug report come in and think, “I know what caused that,” and change the production code. Now maybe you also write a test, but the test passing doesn’t necessarily tell you much. Even if you do things in this order, commenting out your changes once you write the test (to prove that it can fail) gives you the same benefit. Extremely useful, IME.

                                      The other main benefit I think is a tendency to write functions/methods which return values, rather than mutate state or produce side effects, which is of course not limited to TDD but in some idioms that are lax about side effects/mutability, TDD can help you avoid the temptation to just mutate state all over in eg a big Rails controller method or something.

                                      These 2 things I have personally found very valuable in my career; all the other dogmatizing I find sensationalized.

                                      1. 2

                                        if you always make the test fail first, you never fall into that nasty trap of having both buggy code and a buggy test

                                          Precisely. As Sean Parent said in his CppCon 2019 talk, it’s like double-entry bookkeeping in accounting.

                                        These 2 things I have personally found very valuable in my career; all the other dogmatizing I find sensationalized.

                                        Atila’s rule #0 of programming: use your head. I mostly TDD, but sometimes I don’t. It’s always about trade-offs.

                                  2. 4

                                    TDD is basically 1% of software developers aggressively shouting that the other 99% are wrong, stupid, incompetent, and “may wind up fired and in jail” for not using TDD 100% of the time.

                                  1. 3

                                    TL;DR: The generic syntax from D, with Rust’s traits renamed as contracts.

                                    1. 1

                                      There’s a great D project called pegged where you write the PEG grammar as a string and the D compiler produces a parser for it!
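
                                      From memory, usage looks roughly like this (a sketch; see pegged’s README for the real grammar syntax, so treat the details as approximate):

                                      import pegged.grammar;

                                      // The grammar is an ordinary string; the parser is generated at compile time.
                                      mixin(grammar(`
                                      Arithmetic:
                                          Expr    < Term (("+" / "-") Term)*
                                          Term    < Primary (("*" / "/") Primary)*
                                          Primary < "(" Expr ")" / Number
                                          Number  < ~([0-9]+)
                                      `));

                                      void main()
                                      {
                                          auto tree = Arithmetic("1 + 2*3");  // a full parse tree, built by the generated parser
                                      }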

                                      1. 11

                                        Learn it? Yes. Use it? No.

                                        Last but not least, because C is so “low-level”, you can leverage it to write highly performant code to squeeze out CPU when performance is critical in some scenarios.

                                        This is true for every systems programming language in existence and is frequently easier to do in other languages.

                                        1. 1

                                          This. The article makes a good argument that you should be able to read C so you can look at the implementation of Unix tools. There is no good argument for writing C in the article.

                                          1. 1

                                            The problem is that people tend to have limited capacity for remembering things, so they use what they learn. (Or, rather, swiftly un-learn what they never use.) Therefore, an argument for learning X is often the same as an argument for using X.

                                          2. 1

                                            What are some examples of high-performance code in other systems programming languages?

                                            I notice a distinct lack of, say, large-scale number crunching outside of Fortran and C.

                                            1. 1

                                                Ada and Rust come to mind. Ada’s used in time- and performance-critical applications in aerospace. Rust’s metaprogramming even lets it use multicore, GPUs, etc. better. D supports unsafe code if the GC is slow. I think Nim does, too, with it compiling to C. People use those for performance-sensitive apps. Those would be the main contenders.

                                              One I have no data on is Amiga-E which folks wrote many programs for. On Lisp/Scheme side, PreScheme was basically for making “a C or assembly-level program in Lisp syntax” that compiled to C. It didn’t need any of the performance-damaging features of Lisp like GC’s or tail recursion. Probably comparable to C programs in speed.

                                              So, there’s a few.

                                              1. 1

                                                What are some examples of high-performance code in other systems programming languages?

                                                Pretty much anything written in anything. C isn’t magically fast and it’s easy to match or beat it in C++, Rust, D, Nim, …

                                                I notice a distinct lack of, say, large-scale number crunching outside of Fortran and C.

                                                Fortran, sure. But C? I have a feeling that C++ is much more used for that. CERN basically runs on the stuff. Fortran has the pointer aliasing advantage, but again, any language with templates/generics will generate code that’s just as fast.
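
                                                A contrived sketch of why: a template is instantiated per concrete type, so the optimiser ends up looking at the same monomorphic loop a hand-written C version would give it:

                                                // dot!double becomes one concrete function, just like a hand-written C
                                                // version taking double*, so there's nothing left to be slower about.
                                                T dot(T)(const T[] a, const T[] b)
                                                {
                                                    T sum = 0;
                                                    foreach (i; 0 .. a.length)
                                                        sum += a[i] * b[i];
                                                    return sum;
                                                }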

                                            1. 12

                                              I don’t work in Go, but I’ve looked at it just for general knowledge. I’m sorry, but at the level of nitpickiness in the article I wouldn’t like any language except perhaps ACL. Every language feature, even those that are “just missing” such as exceptions in Go, is a point of contention. We learn the idiosyncrasies of each language and devise patterns (or frameworks) to work around them.

                                              1. 12

                                                I’m sorry, but at the level of nitpickiness in the article I wouldn’t like any language

                                                I imagine the author would be happy to work around a few small issues. After all, no language is perfect. However, pile up too many small irritations and you end up with a language which people don’t want to use, and this is a list of the myriad large and small ‘idiosyncrasies’ which have put this person off using Go.

                                                1. 0

                                                  However, pile up too many small irritations and you end up with a language which people don’t want to use

                                                  The fact that many important production/ops codebases (Hashi stack, Kubernetes, Docker, etc.) are written in Go has to be evidence that Go’s “pile of irritations” doesn’t overshadow its benefits.

                                                  1. 7

                                                    How irritating things are is subjective to each user.

                                                2. 10

                                                  I think you’re slightly misrepresenting the point about exceptions. The issue isn’t that Go doesn’t have exceptions, but that Go doesn’t have any system for making sure you don’t accidentally ignore errors. If Go either had noisy exceptions which blow up your program unless you catch them, or if it produced an error when you don’t assign a function’s error return value to a variable, or if it had some other clever solution, it wouldn’t have been an issue. It’s just ignoring errors by default that’s problematic.

                                                  In general I mostly agree with you though, that there are idiosyncrasies in all languages, and you just have to learn to live with them. I have written a bit of Go code myself (some professionally), and it’s fairly nice to work with in spite of its flaws. Being someone who mostly writes C, using interface{} all over the place instead of generics doesn’t even feel wrong.

                                                  My biggest complaint about Go would probably be how it adds a reference all over the source code on where a package happens to be hosted. I’m also not a big fan of the GOPATH stuff, but that’s being phased out (though the transition is a bit weird, where I find some commands require my package to be inside of GOPATH, while others require my package to not be inside of GOPATH).

                                                  1. 6

                                                    I’m sorry, but at the level of nitpickiness in the article I wouldn’t like any language except perhaps ACL.

                                                      I agree. Furthermore, these points have been raised in articles since Go was publicly announced. Each of these points is well known; there is no use in hearing them for the umpteenth time.

                                                    Secondly, some of these points are really in the eye of the beholder, and a Go programmer would see them as strengths. E.g. (not necessarily my opinion):

                                                    Go uses capitalization to determine identifier visibility.

                                                    Great. Now I don’t have to look at the definition to know its visibility.

                                                    Structs do not explicitly declare which interfaces they implement. It’s free to do that – it never promised anything.

                                                      Which is nice, because one can make ad-hoc interfaces that the author of the package did not define. If the author wants to make a guarantee, they could assert that:

                                                      // foo.go
                                                      package foo

                                                      type Foo struct{ /* ... */ }

                                                      func NewFoo() *Foo { return &Foo{} }

                                                      type Bar interface{ /* ... */ }

                                                      // Compile-time assertion that the value returned by NewFoo implements Bar.
                                                      var _ Bar = NewFoo()
                                                    

                                                    There’s no ternary (?:) operator.

                                                    (Over)use of the ternary operator leads to unreadable code.

                                                    1. 6

                                                      Which is nice, because one can make ad-hoc interfaces that the author of the package did not define

                                                        Yes. This is called structural typing and we’ve had it since at least C++ added templates. I don’t think the article argues against this; it brings up problems with it. My issue with it in most implementations (that I know of, every one except for Haskell and Rust) is what happens when you want to implement an interface but haven’t because of a programming error: I’d like the compiler to tell me what I did wrong. To me, that’s the advantage of explicitly declaring an interface/trait/type class, and it’s akin to declaring variables before use.

                                                      (Over)use of the ternary operator leads to unreadable code.

                                                      Overuse of anything, including goroutines, leads to unreadable code. That’s not an argument for or against any feature. I thought the article made a perfectly good case for how the ternary operator makes the code more readable.

                                                      1. 3

                                                        My issue with it in most implementations (that I know of, every one except for Haskell and Rust)

                                                        Haskell and Rust do not use structural typing. Both are nominative type systems, since traits/type classes are explicitly implemented for named types.

                                                          what happens when you want to implement an interface but haven’t because of a programming error: I’d like the compiler to tell me what I did wrong.

                                                        As my example showed, you can do this in Go as well. When I used Go (as a stop gap between not wanting to go back to C++ and until Rust 1.0 was released), I used this approach to assert that types implement the interfaces that I wanted them to.

                                                        Go’s approach has different problems – you can’t implement interface methods externally for a data type that is not under your control. E.g. in Rust or Haskell, one can define a trait/type class and implement it for various ‘external’ types. In such cases, you have to wrap the data type in Go, so that you can define your own methods.

                                                        Overuse of anything, including goroutines, leads to unreadable code. That’s not an argument for or against any feature.

                                                          Not all language constructs are equal in obfuscating readability. Many modern languages choose to omit the ternary operator (e.g. Rust), because their designers believe that it leads to worse understandability.

                                                        1. 2

                                                          Rust has a ternary operator. It’s not spelled ? :, but since every if statement is an expression, it’s there: https://doc.rust-lang.org/reference/expressions/if-expr.html#if-expressions

                                                          1. 1

                                                            I know that if is an expression in Rust, use it daily :). And still people ask for ? :, because it is more terse. But a nested if expression is much easier to read than nested use of the ternary ? : operator.

                                                      2. 1

                                                        While the structural typing thing is (IMO) not a problem (it means you need to write your code in a style suited to structural typing rather than nominative typing, which everybody used to duck typing already does), other points made here are both new to me (as someone who has not written any non-trivial go code but has been casually following it since its release) and sort of shocking.

                                                        Capitalization for identifier visibility is, as you mentioned, sometimes a time-saver when reading code, but as OP mentioned, both less expressive than multi-tier scoping rules & a potential source of enormous diffs when refactoring. This is a case where, fairly unambiguously, something that seems like a clever idea on first blush has massive knock-on effects that large projects need to work around & that affect iteration speed (because you sometimes need to search & replace identifiers in a whole module, & if you’re not careful to write your code in a style where private identifiers that you think might need to become public in the future are rarely referenced directly, you’re liable to have a diff on nearly every line). I’m generally in favor of capitalization having semantic meaning (I like how, in prolog-influenced languages, capitalization distinguishes identifiers from atoms), but nothing indicated by part of an identifier name ought to be something that changes during implementation unless most uses of that identifier will also already need to change to match.

                                                        The absence of a ternary operator is sort of shocking. It’s not the compiler’s job to paternalistically enforce good style, and using the ternary operator judiciously is often better style than avoiding it entirely (especially when what is conceptually a single expression – say, an assignment – needs a default case). Languages that lack a ternary operator tend to support || in expressions in order to handle the most common case – handling defaults in the case of a null or nullish value. Leaving out the ternary operator feels philosophically out of line in go, and more in line with paternalistic languages like Java. (Although Java has a ternary operator, it does all sorts of other things at the compiler level to enforce an idea of ‘good style’ that, because it lacks nuance, often makes code substantially worse & harder to read.)

                                                          Go’s error handling (or lack thereof) is familiar & has been covered before, so I don’t think it surprises anybody. It’s basically like C’s. OP seems to mention it for the sake of completeness – since, yes, it’s easier to silently ignore errors when you don’t have an exception-like system, and the tendency of C code to accidentally ignore important errors is precisely why most modern languages have exceptions.

                                                    1. 6

                                                        Thank you for the engagement, @atilaneves!

                                                      3 questions/topics from my side

                                                        a) Can BetterC be used to link against / leverage large C++ libraries (e.g. Qt or Boost)? That is, can BetterC be used as essentially a C++ replacement (without D’s GC, D’s standard library (Phobos), or any other D runtime dependencies)? For example, can I build a Qt or wxWidgets based app for FreeBSD, Linux, Windows and macOS using BetterC and Qt only?

                                                        b) Can you describe, for us non-D folks, DIP1000? (It seems to be a feature implementing Rust-like semantics for pointers… but the compare/contrast was not clear.)

                                                        c) Mobile app development – does D have a roadmap / production-ready capabilities in this area, and for which platforms?

                                                      Thank you again for your time.

                                                      1. 5

                                                        I don’t see how betterC helps with calling C++. D can already call C++ now, it’s just not easy, especially if they’re heavily templated.

                                                        DIP1000 is D’s answer to Rust’s borrow checker. You can read the dip here. Essentially it makes it so you can’t escape pointers.
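
                                                          A contrived example of the kind of thing it rejects (a sketch; needs -preview=dip1000, or -dip1000 on older compilers):

                                                          @safe int* leak(scope int* p)
                                                          {
                                                              return p;        // rejected: a scope parameter may not escape via the return value
                                                          }

                                                          @safe int* escapeLocal()
                                                          {
                                                              int local = 42;
                                                              return &local;   // rejected: escapes a reference to a stack variable
                                                          }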

                                                        There’s been some work done for Android, but the person doing that left the community. It was possible to run D there, but I’m not sure what the current status is.

                                                        1. 3

                                                          Thank you.

                                                            WRT C++/D compatibility, I watched a video for this paper https://www.walterbright.com/cppint.pdf but, if I remember right, it was from 2015 – and I could not figure out whether, after D was officially included in GCC, there were any updates to the C++ ABI compatibility feature.

                                                          1. 1

                                                            The ABI should just work. Otherwise it’s a bug.

                                                      1. 8

                                                          The talk covers the psychology of language adoption, how C++ conquered the world, and how D can learn from / shamelessly steal from it.

                                                        OP and speaker here. AMA.

                                                        1. 15

                                                            In a way this is why I use Go. I like the fact that not every feature that could be implemented is implemented. I think there are better languages if you want that. And don’t use a language just because it is by Google.

                                                            Also, I think that it is actually more the core team than Google. I think that if Go were driven by the company, it would be much different than what we have now. It would probably look more like Java or Dart.

                                                            One needs to see the context. Go is by people with a philosophy in the realms of “more is less” and “keep it simple”, so community-wise it is closer to Plan9, suckless, cat-v, OpenBSD, etc. That is, people taking pride in not creating something for everyone.

                                                          However unlike the above the language was hyped a lot, especially because it is by Google and especially because it was picked up fairly early, even by people that don’t seem to align that much with the philosophy behind Go.

                                                            I think generics are just the most prominent example of “why can’t I have this?”. Compare it with the communities mentioned above. Various suckless software, Plan9, OpenBSD. If somehow all the Linux people were thrown onto OpenBSD, a lot of them would probably scream at Theo about how they want this and that, and there would probably be some major thing they “cannot have”.

                                                            While I don’t disagree with “Go is owned by Google”, I think on the design side (and generics are a part of that) I’d say it’s owned by a core team with mostly aligned ideas. While I also think that Google certainly has a bigger say even on the design side than the rest of the world, I think the same constellation of authors, independently of Google, would have led to a similar language, with probably way fewer users and available libraries, and I also don’t think Docker and other projects would have picked it up, at least not that early.

                                                            Of course there are other things, such as easy concurrency, that could have played a role in adoption, but Go would probably have had a lot of downsides. It probably would have had fewer performance improvements and slower garbage collection, because I don’t think there would be many people working so much in that area.

                                                            So to sum it up: while Google probably has a lot of say, I don’t think that is the reason for not having generics. Maybe it is even that Go doesn’t have generics (yet) despite Google. After all, they are a company where a large part of the developers have generics in their day-to-day programming language.

                                                          EDIT: Given their needs I could imagine that Google for example was the (initial) cause for type aliases. I could be wrong of course.

                                                          1. 8

                                                            it was picked up fairly early, even by people that don’t seem to align that much with the philosophy behind Go.

                                                            Personally, I think this had a lot to do with historical context. There weren’t (and still aren’t, really) a lot of good options if you want a static binary (for ease of deployment / distribution) and garbage collection (for ease of development) at the same time. I think there were a lot of people suffering from “interpreter fatigue” (I’ve read several times that Python developers flocked to Go early on, for example). So I think that, for quite a few people, Go is just the least undesirable option, which helps explain why everyone has something they want it to do differently.

                                                            Speaking for myself, I dislike several of the design decisions that went into Go, but I use it regularly because for the things it’s good at, it’s really, really good.

                                                            1. 5

                                                              There weren’t (and still aren’t, really) a lot of good options if you want a static binary (for ease of deployment / distribution) and garbage collection (for ease of development) at the same time.

                                                              Have you looked at “another language”, and if so, what are your thoughts?

                                                              1. 4

                                                                Not a whole lot. My superficial impression has been that it is pretty complicated and would require a pretty substantial effort to reach proficiency. That isn’t necessarily a bad thing, but it kept me from learning it in my spare time. I could be totally wrong, of course.

                                                              2. 4

                                                                There weren’t (and still aren’t, really) a lot of good options if you want a static binary (for ease of deployment / distribution) and garbage collection (for ease of development) at the same time

                                                                D has both.

                                                                1. 2

                                                                  I completely agree with your statement regarding the benefits and that this is certainly a reason to switch to Go.

                                                                    That comment wasn’t meant to say that there is no reason to pick up Go, but more that, despite the benefits you mentioned, if there wasn’t a big company like Google backing it, it might have gone unnoticed, or at least other companies would have waited longer to adopt it, meaning that I find it unlikely it would be where it is today.

                                                                    What I mean is that a certain hype and a big company behind it are a factor in this being “a good option” for many more people, especially when arguing for a relatively young language “not even having classes and generics” and a fairly primitive/simple garbage collector in the beginning.

                                                                  Said communities tend to value these benefits much higher than the average and align very well in terms of what people emphasized. But of course it’s not like one could be sure what would have happened and I am also drifting off a bit.

                                                              1. 3

                                                                Having to work to tell a computer what it already knows is one of my pet peeves.

                                                                  A type is not only for the computer, it’s also for the human reading the source. The more you leave to the computer to work out on its own, the more the human reading your code will have to hunt for that now-hidden information.

                                                                I also believe that wanting to know the exact type of a variable is a, for lack of a better term, “development smell”, especially in typed languages with generics.

                                                                Why?

                                                                I think that the possible operations on a type are what matters, and figuring out the exact type if needed is a tooling problem.

                                                                  Yes, tooling can overcome the issue described above (will have to hunt for that now-hidden information), but that does not explain why it is preferable. What is gained by doing so?

                                                                1. 6

                                                                  A type is not only for the computer, it’s also for the human reading the source

                                                                  It depends. It doesn’t really matter if myfunc returns a std::vector or std::list if all I’m doing is filtering the results. It does matter however that the type has .cbegin() and .cend() iterators.

                                                                  Why?

                                                                  Because of what I wrote just afterwards: “I think that the possible operations on a type are what matters”.

                                                                    but that does not explain why it is preferable. What is gained by doing so?

                                                                  Personally, I hardly ever care what type something is unless it’s terribly named. Otherwise it’s getting passed to another function/algorithm anyway. Even if I did know the type, I’d more likely than not have to jump to its definition to find out / remember what it is and what I can do with it.

                                                                  As for what is gained: refactoring. With auto I don’t have to change all the variable declarations. There’s also the avoidance of implicit conversions.
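
                                                                    A contrived D example of what I mean (the concrete type of the pipeline below is a deeply nested template instantiation nobody would want to spell out, and it doesn’t matter):

                                                                    import std.algorithm : filter, map;

                                                                    auto positiveSquares(int[] xs)
                                                                    {
                                                                        // All that matters is that the result is a range
                                                                        // you can iterate over or pass along.
                                                                        return xs.filter!(x => x > 0).map!(x => x * x);
                                                                    }

                                                                    unittest
                                                                    {
                                                                        import std.algorithm : equal;
                                                                        assert(positiveSquares([-1, 2, 3]).equal([4, 9]));
                                                                    }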

                                                                  1. 1

                                                                    If the code-base is already good enough, that all implementations of an interface are interchangeable (no side-effect specific to a type that one would have to worry about), wouldn’t the refactoring already be possible mostly by tools, with minimal additional work to verify and touch-up possible artifacts?

                                                                    You write that you wonder why this is a debate at all. But then, type inference is a “heavy” feature in a language and reduce code legibility. It has to demonstrate a clear advantage for such debate to be settled.

                                                                    C++ is too bloated for me to use anyway, and this is something that I would suspect of most languages implementing type inference. There is still no clear advantage to having it (I mean, it won’t really come into consideration when choosing a language to work with).

                                                                    1. 3

                                                                      Here’s a way to settle the debate: remove local type inference from a language that has it, and see how users react.

                                                                      Of course, you’ll have to find a language designer willing to do such a thing, but then you’ll see if e.g. legibility is actually an issue: if people complain that adding local type inference reduces legibility (as you say), and just as many people complain that removing it also reduces legibility, then you may be able to make a claim as to whether “legibility” is a subjective property.

                                                                      1. 2

                                                                        wouldn’t the refactoring already be possible mostly by tools, with minimal additional work to verify and touch-up possible artifacts?

                                                                        Even so, who wants large diffs for no reason?

                                                                        But then, type inference is a “heavy” feature in a language and reduce code legibility

                                                                        Which seems to be the point of the people who prompted me to write the post in the first place. I understand that that’s your opinion, and it’s one I don’t agree with. What’s odd is that I’m not the only one - there are many languages where this debate doesn’t seem to happen.

                                                                        C++ is too bloated for me to use anyway, and this is something that I would suspect of most languages implementing type inference

                                                                        It depends on your definition of bloated. Would C be bloated if all we did was add type inference to it?

                                                                  1. 5

                                                                    Huge congratulations!

                                                                    Some questions I managed to remember from the time when I tried, uhmh, to write my own build utility and discovered, as many folks do, that it’s waaay harder than it looks:

                                                                    • (how) does it handle auto-detection of .h files included from a .c file?
                                                                    • does it detect removed files to discover a file needs rebuilding? e.g. if foo.c has #include <bar.h>, and after compilation bar.h file is removed from disk, does it try to rebuild foo.c (and reports an error)?
                                                                    • how can I add extra compilation flags: to some files in the project? to all files in the project? to some linked libraries?
                                                                    • would it be able to correctly support Java files “intelligently”? taking into account that compiling a .java file can emit >1 .class files? (e.g. because of inner classes)
                                                                    1. 3

                                                                      Normally, dependencies are determined by the compiler. That’s how CMake and ninja do it.

                                                                      1. 2

                                                                          Yes, but there are tricky situations, and each of CMake and ninja has to either have a way to handle them in some way, or risk missing them. I.e. “corner cases” to include in every respectable build system’s test suite. The “…[#include-d] bar.h file is removed from disk…” case that I tried to describe tersely above is one such scenario. Java source files, with some of them emitting 2+ .class files from one .java file, are another tricky case. If I learn of a build system that demonstrably handles both those cases correctly, in an automated way (without having to resolve those situations/files by hand), I’d be immediately interested in it. I don’t know of such a system yet that isn’t also a huge monstrosity like Maven or something.

                                                                      2. 2

                                                                        Xmake has dealt with these issues.

                                                                        1. 1

                                                                            Can you explain a bit more how this is handled in xmake? I am really curious to understand!

                                                                          1. 7

                                                                            (how) does it handle auto-detection of .h files included from a .c file?

                                                                              xmake will add the -H flag for gcc when compiling *.c files to get all the dependent .h files, and will check their modification times.

                                                                            does it detect removed files to discover a file needs rebuilding? e.g. if foo.c has #include <bar.h>, and after compilation bar.h file is removed from disk, does it try to rebuild foo.c (and reports an error)?

                                                                              xmake will also cache the list of dependent .h files in the dependency files (*.d) and check them when building.

                                                                            how can I add extra compilation flags: to some files in the project?

                                                                            add_files("src/test.c", {cflags = "-Dxxx", defines = "yyy"})
                                                                            

                                                                            to all files in the project?

                                                                            add_cflags("-Dxxx") -- add flags to root scope for all targets
                                                                            target("test")
                                                                                add_files("src/*.c")
                                                                            
                                                                            target("test2")
                                                                                add_files("src2/*.c")
                                                                                add_cflags("-Dyyy")  -- only for target test2
                                                                            

                                                                            to some linked libraries?

                                                                            target("test")
                                                                                add_files("src/*.c")
                                                                                add_links("pthread")
                                                                                add_linkdirs("xxx")
                                                                            

                                                                            would it be able to correctly support Java files “intelligently”? taking into account that compiling a .java file can emit >1 .class files? (e.g. because of inner classes)

Java is not supported yet.

                                                                        2. 1

                                                                          I write C and C++ but I use GNU Make.

For questions #1 and #2, can’t you just add the -MMD -MP -MT compiler switches to gcc and clang (I have no idea if Visual C++ supports these since I don’t use that compiler and just use MinGW on Windows), and then in your makefile just include the generated .d files?
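Roughly something like this (an untested sketch; the foo target and the src/ layout are just placeholders):

SRCS := $(wildcard src/*.c)
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

foo: $(OBJS)
    $(CC) $^ -o $@

# -MMD writes a .d file next to each object, listing the headers it pulled in;
# -MP adds phony targets for those headers so a deleted header doesn't break the build.
%.o: %.c
    $(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# Pull in whatever dependency files exist from previous builds.
-include $(DEPS)

With that in place, touching a header only rebuilds the objects whose .d files mention it.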

                                                                          1. 2

Yes, you can do that, but supporting MSVC is critical. The majority of developers use Windows, so if your Windows support isn’t first rate, it greatly reduces your probability of success.

                                                                            1. 2

We can add the -showIncludes flag to get the include dependencies and generate .d files for MSVC when building files. @akavel

                                                                              1. 1

                                                                                I seem to recall that it was exactly the .d files + make which had trouble detecting a removed .h file. Though I may be wrong, it’s been a long time. Still, given the trickiness, unless I see an explicit test case…

                                                                            1. 34

                                                                              Build systems are hard because building software is complicated.

Maybe it’s the first commit in a brand new repository and all you have is foo.c in there. Why am I telling the compiler what to build? What else would it build??

Compilers should not be the build system; their job is to compile. We have abstractions, layers, and separation of concerns for a reason. Some of those reasons are explained in http://www.catb.org/~esr/writings/taoup/html/ch01s06.html. But the bottom line is if you ask a compiler to start doing build system things, you’re going to be frustrated later on when your project is complex and the build system/compiler mix doesn’t do something you need it to do.

The good news is that for trivial projects, writing your own build system is likewise trivial as well. You could do it in a few lines of bash if you want. The author did it in 8 lines of Make but still thinks that’s too hard? I mean, this is like buying a bicycle to get you all around town and then complaining that you have to stop once a month and spend 5 minutes cleaning and greasing the chain. Everyone just looks at you and says, “Yes? And?”

                                                                              1. 5

                                                                                The author could have done it in two if he knew Make. And no lines if he just has a single file project. One of the more complex projects I have uses only 50 lines of Make, with 6 lines (one implicit rule, and 5 targets) doing the actual build (the rest are various defines).

                                                                                1. 3

                                                                                  What are the two lines?

                                                                                  1. 4

                                                                                    I’m unsure what the two lines could be, but for no lines I think spc476 is talking about using implicit rules (http://www.delorie.com/gnu/docs/make/make_101.html) and just calling “make foo”

                                                                                    1. 2

                                                                                      I tried writing it with implicit rules. Unless I missed something, they only kick in if the source files and the object files are in the same directory. If I’m wrong, please enlighten me. I mentioned the build directory for a reason.

                                                                                      1. 2

Right, the no-lines situation only applies to the single-file project setup. I don’t know what the 2 lines would be for the example given in the post.

                                                                                    2. 3

                                                                                      First off, it would build the executable in the same location as the source files. Sadly, I eventually gave up on a separate build directory to simplify the makefile. So with that out of the way:

                                                                                      CFLAGS ?= -Iinclude -Wall -Wextra -Werror -g
                                                                                      src/foo: $(patsubst %.c,%.o,$(wildcard src/*.c))
                                                                                      

                                                                                      If you want dependencies, then four lines would suffice—the two above plus these two (and I’m using GNUMake if that isn’t apparent):

                                                                                      .PHONY: depend
                                                                                      depend:
                                                                                          makedepend -Y -- $(CFLAGS) -- $(wildcard src/*.c) 
                                                                                      

                                                                                      The target depend will modify the makefile with the proper dependencies for the source files. Okay, make that GNUMake and makedepend.

                                                                                    3. 1

                                                                                      Structure:

                                                                                      .
                                                                                      ├── Makefile
                                                                                      ├── include
                                                                                      │   └── foo.h
                                                                                      └── src
                                                                                          ├── foo.c
                                                                                          └── prog.c
                                                                                      

                                                                                      Makefile:

                                                                                      CFLAGS = -Iinclude
                                                                                      VPATH = src:include
                                                                                      
                                                                                      prog: prog.c foo.o
                                                                                      foo.o: foo.c foo.h
                                                                                      

                                                                                      Build it:

                                                                                      $ make
                                                                                      cc -Iinclude   -c -o foo.o src/foo.c
                                                                                      cc -Iinclude    src/prog.c foo.o   -o prog
                                                                                      
                                                                                      1. 1

                                                                                        Could you please post said two lines? Thanks.

                                                                                        1. 4

                                                                                          make could totally handle this project with a single line actually:

                                                                                          foo: foo.c main.c foo.h
                                                                                          

That’s more than enough to build the project (replace .c with .o if you want the object files to be generated). Having subdirectories would make it more complex indeed, but for building a simple project, we can use a simple organisation! Implicit rules are made for the case where source and include files are in the same directory as the Makefile. Now we could argue whether or not that’s good practice. Maybe make should have implicit rules hardcoded for src/, include/ and build/ directories. Maybe not.

In your post you say that Pony does it the right way by having the compiler be the build system, building projects in a simple way by default. Maybe ponyc is aware of directories like src/ and include/, and that could be an improvement over make here. But that doesn’t make its build system simple. When you go to the ponylang website, you find links to “real-life” pony projects. First surprise: 3 of them use a makefile (and what a makefile…): jylis, ponycheck, wallaroo + rules.mk. One of them doesn’t, but it looks like the author put some effort into his program organisation so ponyc can build it the simple way.

As @bityard said, building software is complex, and no build system is smart enough to build any kind of software. All you can do is learn to use your tools so you can make better use of them and make your work simpler.

                                                                                          Disclaimer: I never looked at pony before, so if there is something I misunderstood about how it works, please correct me.

                                                                                      2. 2

                                                                                        Build systems are hard because building software is complicated.

                                                                                        Some software? Yes. Most software? No. That’s literally the point of the first paragraph of the blog.

                                                                                        Compilers should not be the build system

                                                                                        Disagree.

                                                                                        We have abstractions, layers, and separation of concerns for a reason

                                                                                        Agree.

But the bottom line is if you ask a compiler to start doing build system things, you’re going to be frustrated later on when your project is complex and the build system/compiler mix doesn’t do something you need it to do.

Agree, if “the compiler’s default behaviour” is the only option. Which would be silly, since the blog’s first paragraph argues that some projects need more than that.

                                                                                        The good news is that for trivial projects, writing your own build system is likewise trivial as well

                                                                                        I think I showed that’s not the case. Trivial is when I don’t have to tell the computer what it already knows.

                                                                                        The author did it in 8 lines of Make but still thinks that’s too hard?

8 lines is infinity times the ideal number, which is 0. So yes, I think it’s too hard. It’s infinity times harder. It sounds like a 6-year-old’s argument, but that doesn’t make it any less true.

                                                                                        1. 7

                                                                                          I have a few projects at work that embed Lua within the application. I also include all the modules required to run the Lua code within the executable, and that includes Lua modules written in Lua. With make I was able to add an implicit rule to generate .o files from .lua files so they could be linked in with the final executable. Had the compiler had the build system “built in” I doubt I would have been able to do that, or I still would have had to run make.
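One way to write such a rule (this sketch uses GNU ld's binary input mode; the real recipe could just as well go through luac or a small bin2c step, so treat the details as assumptions):

# Sketch only: wrap each .lua module into a linkable object file using GNU ld's
# "binary" input format. The linker then exposes symbols such as
# _binary_<path>_lua_start/_end that the C side can use to find the embedded script.
%.o: %.lua
    $(LD) -r -b binary -o $@ $<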

                                                                                          1. -1

                                                                                            Compilers should not be the build system

                                                                                            Disagree.

                                                                                            Please, do not ever write a compiler.

Your examples are ridiculous: using shell invocations and find is far, far from the simplest way to list your source, object and output files. As others pointed out, you could use implicit rules. Even without implicit rules, that would be 2 lines instead of those 8:

                                                                                            foo: foo.c main.c foo.h
                                                                                                    gcc foo.c main.c -o foo
                                                                                            

Agree, if “the compiler’s default behaviour” is the only option.

Ah, so you want the compiler to embed in its code a way to be configured for each and every possible build it could be used in? This is an insane proposition, when the current solution is either the team writing the project configuring the build system as well (which could be done in shell, for all it matters), or thin wrappers like the ones Rust and Go use around their compilers: they foster best practices while leaving the flexibility needed by heavier projects.

                                                                                            You seem so arrogant and full of yourself. You should not.

                                                                                            1. 3

                                                                                              I’d like to respectfully disagree with you here.

                                                                                              Ah, then you want the compiler to embed in its code a way to be configured for every and all possible build that it could be used in?

                                                                                              That’s not at all what he’s asking for.

                                                                                              This is an insane proposition

                                                                                              I think this is probably true.

                                                                                              You seem so arrogant and full of yourself. You should not.

Disagree. He’s stated his opinion and provided examples demonstrating why he believes his point is valid. Finally, he has selectively defended said opinion. I don’t think that’s arrogance at all. This, for example, doesn’t read like arrogance to me.

                                                                                              I don’t appreciate the name calling and I don’t think it has a place here on lobste.rs.

                                                                                              1. -3

                                                                                                What is mostly arrogant is his dismissal of “dumb” tools, simple commands that will do only what they are asked to do and nothing else.

                                                                                                He wants his tools to presume his intentions. This is an arrogant design, which I find foolish, presumptuous, uselessly complex and inelegant. So I disagree on the technical aspects, certainly.

Now, the way he constructed his blog post and main argumentation is also extremely arrogant, or in bad faith, by presenting his own errors as normal ways of doing things and accusing other people of building bad tools because they would not do things his way. This is supremely arrogant and I find it distasteful.

                                                                                                Finally, his blog is named after himself and seems a monument to his opinion. He could write on technical matters without putting his persona and ego into it, which is why I consider him full of himself.

My criticism is that, besides his technical propositions, which I disagree with, the form he uses to present them does him a disservice by putting the people he interacts with on edge. He should not do that if he wants his writings to be at all impactful, in my opinion.

                                                                                                1. 2

the form he uses to present them does him a disservice by putting the people he interacts with on edge

                                                                                                  Pot, meet Kettle.

                                                                                                  Mirrors invite the strongest responses.

                                                                                          2. 1

yeah. on the flip side, too much configuration makes for overcomplicated build systems. For me, there’s a sweet spot with cmake.

                                                                                          1. 9

                                                                                            If I touch any header, all source files that #include it will be recompiled. This happens even if it’s an additive change, which by definition can’t affect any existing source files.

This isn’t correct; plenty of additive changes could necessitate recompiling other translation units.

                                                                                            Just off the top of my head:

                                                                                            • Adding a field to a struct or class
                                                                                            • Adding a visibility modifier before a field on a struct or class
                                                                                            • Adding a virtual function
                                                                                            • Adding a destructor
                                                                                            • Adding a copy constructor
                                                                                            • Adding the notation that specifically tells the compiler to not automatically generate either of the above
                                                                                            • Adding a new function overload
                                                                                            • Adding a new template specialization
                                                                                            • Adding any code to a template body
                                                                                            • Adding a * after a type in any function declaration
                                                                                            • Adding a * after a type in a typedef
                                                                                            • Adding any number of preprocessor directives
                                                                                            1. 4

                                                                                              I think an “additive change” here refers to adding something completely new (that nothing existing could possibly already depend on); this is in contrast to the addition of text which could modify the semantics of existing functions or types, as I believe most of your examples do in some way.

                                                                                              1. 6

No build system is going to be able to, in general, distinguish those two scenarios. Consider that your definition of an additive change to foo.h depends not just on the contents of foo.h but on the contents of every other header and compilation unit referenced by any compilation unit referencing foo.h, taking into account both macro expansions and compile-time Turing-complete template semantics.

The net effect is that you need to re-compile the entire tree of compilation units that ever reference foo.h just to determine whether or not you need to re-compile them anyway. Otherwise how do you know whether or not int bar(short x) { return x; } is a purely additive change introducing a completely never-before-seen function bar, or some more-specific overload of some other bar defined in one of the compilation units that included foo.h? You can’t rule that out a priori.

                                                                                                I’m almost positive that even adding a simple variable doesn’t meet the “unambiguously additive” definition because you could construct some set of templates such that SFINAE outcomes would change in its presence. Ditto typedefs.

There are C macro constructs that also let you alter what the compiler sees based on whether or not a variable/function/etc. exists. So even if you had a database of every single thing a compilation unit referenced on the last compile, and could rule out that foo.c saw any bar at the last compilation, it’s going to be impossible to know whether any given addition of a bar to a header would cause something new to be seen by the compiler on a subsequent compilation of foo.c, be it via macro trickery or template metaprogramming.

                                                                                                1. -2

                                                                                                  A compiler can distinguish between those scenarios. That was the whole point of the blog.

                                                                                                  1. 6

Your assertion is incorrect, or at least you’re not understanding the semantics of C macros and/or C++ template metaprogramming. Holding the AST of foo.c in memory is not sufficient to determine whether or not any change to foo.h is “additive”, because any change to foo.h can in the extreme lead to a completely different AST being constructed from the textual contents of foo.c on the next compile, in addition to executing an arbitrary amount of Turing-complete C++ compile-time template metaprogramming.

                                                                                                    You need to reconstruct the AST from foo.c based on the new contents of foo.h to determine whether or not the change was “additive”. That’s a recompilation of foo.c. You save nothing.

                                                                                                    1. 1

Compilers are not magic. They have to process things, compile them, if you will, to know what will happen at the end. Now maybe you can just go to the AST (if your compiler does that) and know from there what changed, but you still need to compile everything to an AST again, and diffing the ASTs to know what comes next can be complicated and more expensive than just turning the AST into the output. Maybe, just maybe, it only makes sense for massive projects, and not your proposed 10-file project.

                                                                                                2. -1

                                                                                                  That’s not what I meant, and I used C instead of C++ for a reason. I meant additive in terms of adding to the API without changing what came before it. I could have been clearer.

                                                                                                1. 9

                                                                                                  Some languages that do this well:

                                                                                                  • rust (with cargo)

                                                                                                  • d (with dub)

                                                                                                  • go (kinda) – build process itself is easy enough but the entire infrastructure around building go packages is a mess.

                                                                                                  1. 2

                                                                                                    I write D for a living. Dub is a good package manager but it’s a terrible build system. Utterly dire.

                                                                                                    1. 1

Even with languages that don’t have a built-in build system, newer build tools do discovery of source files. Examples in the JVM world include gradle, sbt and lein.

                                                                                                    1. 16

                                                                                                      I also think that new languages should ship with a compiler-based build system, i.e. the compiler is the build system.

                                                                                                      Doesn’t Go already do this ?

                                                                                                      1. 16

I think Cargo works well at this. It’s a wrapper for the compiler, but it feels so well-integrated that the distinction doesn’t matter. I’ve never had trouble with stale files with Cargo, or had to force a rebuild like I have with Make.

                                                                                                        1. 13

                                                                                                          Rustc does as much of the ‘build system’ stuff as Cargo. rustc src/main.rs finds all the files that main.rs needs to build, and builds them all at once. The only exception (i.e. all Cargo has to do) is pointing it at external libraries.

With external libraries, if you have an extern crate foo in your code, rustc will deal with that automagically as well if it can (it searches a search path for it; you can add things to the search path with -L deps_folder). Alternatively, regardless of whether or not you have an extern crate foo (as of Rust 2018; prior to that it was always necessary), you can define the dependency precisely with --extern foo=path/to/foo.rlib.

All cargo does is download dependencies, build them into rlibs as well, and add those --extern foo=path/to/foo declarations (and other options like -C opt-level=3) to a rustc command line, based on a config file.
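To make that concrete, here is roughly what the hand-written equivalent of a tiny two-crate build would look like, sketched as a makefile (crate names and paths are invented for illustration; cargo's real command lines carry extra flags for metadata hashes, target directories and so on):

# Hypothetical stand-in for what cargo does behind the scenes.
libmylib.rlib: mylib/src/lib.rs
    rustc --edition 2018 --crate-type lib --crate-name mylib -o $@ $<

# The application build just gets an explicit --extern pointing at the rlib.
app: src/main.rs libmylib.rlib
    rustc --edition 2018 --extern mylib=libmylib.rlib -C opt-level=3 -o $@ $<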

                                                                                                          1. 4

                                                                                                            Oh, right! That’s neat. I did wonder whether Cargo looked through the module tree somehow, and the answer is that it doesn’t even need to.

                                                                                                        2. 5

                                                                                                          GHC tried to do this. I don’t personally feel that it was a good idea, or that it worked out very well. Fortunately, it wasn’t done in a way that interfered with the later development of Cabal.

                                                                                                          1. 2

                                                                                                            Having written a bunch of Nix code, to invoke Cabal, to set up ghc-pkg, to invoke ghc, I would say the situation is less than ideal (and don’t get me started on Stack..) ;)

                                                                                                          2. 1

                                                                                                            Not in the way I describe, no.

                                                                                                            1. 7

                                                                                                              How so? go install pretty much behaves as make install for the vast majority of projects?

                                                                                                              1. 0

Did you read the blog? The reason I want the compiler to be involved is to have dependencies calculated at the AST node level. That’s definitely not what Go does.

                                                                                                                1. 4

                                                                                                                  I read it; I was under the impression that your main point was that build systems should “Just Work” without all sorts of Makefile muckery, and that “the compiler [should be] the build system”. The comment about AST based dependencies seemed like a footnote to this.

                                                                                                                  The go command already works like that. I suppose AST based dependencies could be added to the implementation, but I’m not sure if that would be a major benefit. The Go compiler is reasonably fast to start with (although not as fast as it used to be), and the build cache introduced in Go 1.10 works pretty well already.

                                                                                                                  1. 3

                                                                                                                    I want the compiler … to [calculate dependencies] at the AST node level. That’s definitely not what Go does.

Technically the go tool isn’t the Go compiler (6g/8g), but practically it is, and since the introduction of modules, it definitely parses and automatically resolves dependencies from source. (Even previous, separate tools like dep parsed the AST and resolved the dep graph with a single command.)

                                                                                                            1. 6

I agree we can do better, and custom language-specific solutions will almost always beat out language-agnostic ones. However, I didn’t see the author mention optimization techniques like inlining and Link-Time Optimization (LTO), which I believe are key reasons why seemingly disparate portions actually end up intertwined and thus require rebuilding.

                                                                                                              1. 4

Good point on LTO and optimisation. The thing is, building an optimised binary is something I rarely do and don’t particularly care about. It’s all about running the tests for me.

                                                                                                                1. 4

That’s kind of what I figured, but I didn’t see the post mention debug builds only. For debug builds, I totally agree with you!

                                                                                                              1. 2

                                                                                                                Good article!

                                                                                                                From my standpoint, choosing C improves the odds, as it makes it easier to get closer to the metal

                                                                                                                I don’t see how.

                                                                                                                Although some might argue C has slowed my productivity, bloated the code base, and made it susceptible to all manner of unsafety bugs

                                                                                                                Yes.

                                                                                                                that has not been my experience

                                                                                                                If not done already, fuzzing the compiler and using ASAN might be interesting.