1. 11

    Learn it? Yes. Use it? No.

    Last but not least, because C is so “low-level”, you can leverage it to write highly performant code to squeeze out CPU when performance is critical in some scenarios.

    This is true for every systems programming language in existence and is frequently easier to do in other languages.

    1. 1

      This. The article makes a good argument that you should be able to read C so you can look at the implementation of Unix tools. There is no good argument for writing C in the article.

      1. 1

        The problem is that people tend to have limited capacity for remembering things, so they use what they learn. (Or, rather, swiftly un-learn what they never use.) Therefore, an argument for learning X is often the same as an argument for using X.

      2. 1

        What are some examples of high-performance code in other systems programming languages?

        I notice a distinct lack of, say, large-scale number crunching outside of Fortran and C.

        1. 1

          Ada and Rust come to mind. Ada’s used in time- and performance-critical applications in aerospace. Rust’s metaprogramming even lets it use multicore, GPUs, etc. better. D supports unsafe code if the GC is too slow, and I think Nim does too, since it compiles to C. People use those for performance-sensitive apps. Those would be the main contenders.

          One I have no data on is Amiga E, which folks wrote many programs in. On the Lisp/Scheme side, PreScheme was basically for writing “a C- or assembly-level program in Lisp syntax” that compiled to C. It didn’t need any of the performance-damaging features of Lisp like GC or tail recursion. Probably comparable to C programs in speed.

          So, there’s a few.

          1. 1

            What are some examples of high-performance code in other systems programming languages?

            Pretty much anything written in anything. C isn’t magically fast and it’s easy to match or beat it in C++, Rust, D, Nim, …

            I notice a distinct lack of, say, large-scale number crunching outside of Fortran and C.

            Fortran, sure. But C? I have a feeling that C++ is much more used for that. CERN basically runs on the stuff. Fortran has the pointer aliasing advantage, but again, any language with templates/generics will generate code that’s just as fast.

        1. 12

          I don’t work in Go, but I’ve looked at it just for general knowledge. I’m sorry, but at the level of nitpickiness in the article I wouldn’t like any language except perhaps ACL. Every language feature, even those that are “just missing” such as exceptions in Go, is a point of contention. We learn the idiosyncrasies of each language and devise patterns (or frameworks) to work around them.

          1. 11

            I’m sorry, but at the level of nitpickiness in the article I wouldn’t like any language

            I imagine the author would be happy to work around a few small issues. After all, no language is perfect. However, pile up too many small irritations and you end up with a language which people don’t want to use, and this is a list of the myriad large and small ‘idiosyncrasies’ which have put this person off using Go.

            1. 0

              However, pile up too many small irritations and you end up with a language which people don’t want to use

              The fact that many important production/ops codebases (Hashi stack, Kubernetes, Docker, etc.) are written in Go has to be evidence that Go’s “pile of irritations” doesn’t overshadow its benefits.

              1. 6

                How irritating things are is subjective to each user.

            2. 9

              I think you’re slightly misrepresenting the point about exceptions. The issue isn’t that Go doesn’t have exceptions, but that Go doesn’t have any system for making sure you don’t accidentally ignore errors. If Go either had noisy exceptions which blow up your program unless you catch them, or if it produced an error when you don’t assign a function’s error return value to a variable, or if it had some other clever solution, it wouldn’t have been an issue. It’s just ignoring errors by default that’s problematic.
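              A minimal sketch of that failure mode (the path is made up):

              ```go
              package main

              import (
              	"fmt"
              	"os"
              )

              func main() {
              	// os.Remove returns an error, but nothing forces the caller to
              	// look at it: this line compiles and runs without any complaint.
              	os.Remove("/no/such/file")

              	// Naming the result does trigger the unused-variable error:
              	//   err := os.Remove("/no/such/file") // compile error: declared and not used
              	// ...but the blank identifier silences even that:
              	_ = os.Remove("/no/such/file")

              	fmt.Println("carried on despite two failed calls")
              }
              ```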

              In general I mostly agree with you though, that there are idiosyncrasies in all languages, and you just have to learn to live with them. I have written a bit of Go code myself (some professionally), and it’s fairly nice to work with in spite of its flaws. Being someone who mostly writes C, using interface{} all over the place instead of generics doesn’t even feel wrong.

              My biggest complaint about Go would probably be how it scatters references throughout the source code to wherever a package happens to be hosted. I’m also not a big fan of the GOPATH stuff, but that’s being phased out (though the transition is a bit weird: I find some commands require my package to be inside GOPATH, while others require it to be outside).

              1. 6

                I’m sorry, but at the level of nitpickiness in the article I wouldn’t like any language except perhaps ACL.

                I agree. Furthermore, these points have been raised in articles ever since Go was publicly announced. Each of these points is well known; there is no use in hearing them for the umpteenth time.

                Secondly, some of these points are really in the eye of the beholder, and a Go programmer would see them as strengths. E.g. (not necessarily my opinion):

                Go uses capitalization to determine identifier visibility.

                Great. Now I don’t have to look at the definition to know its visibility.
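                A tiny sketch of the convention (the identifiers are made up):

                ```go
                package main

                import "fmt"

                // MaxRetries starts with a capital letter, so it is exported:
                // any package importing this one can refer to it.
                var MaxRetries = 3

                // defaultTimeout starts lowercase, so it is unexported:
                // it is visible only within this package.
                var defaultTimeout = 30

                func main() {
                	fmt.Println(MaxRetries, defaultTimeout)
                }
                ```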

                Structs do not explicitly declare which interfaces they implement.

                It’s free to do that – it never promised anything.

                Which is nice, because one can make ad-hoc interfaces that the author of the package did not define. If the author wants to make a guarantee, they can assert it at compile time:

                // foo.go

                type Foo struct{ /* ... */ }

                type Bar interface{ /* ... */ }

                // Compile-time check: the build fails unless *Foo implements Bar.
                var _ Bar = (*Foo)(nil)
                

                There’s no ternary (?:) operator.

                (Over)use of the ternary operator leads to unreadable code.
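                For reference, Go’s substitute for the ternary is an if/else around an assignment; a sketch with made-up values:

                ```go
                package main

                import "fmt"

                func main() {
                	n := 7

                	// In C this would be: parity = n % 2 == 0 ? "even" : "odd";
                	// Go's idiom assigns a default, then overrides it conditionally.
                	parity := "odd"
                	if n%2 == 0 {
                		parity = "even"
                	}

                	fmt.Println(n, "is", parity)
                }
                ```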

                1. 6

                  Which is nice, because one can make ad-hoc interfaces that the author of the package did not define

                  Yes. This is called structural typing, and we’ve had it since at least C++ added templates. I don’t think the article argues against this; it brings up problems with it. My issue with it in most implementations (every one I know of except Haskell and Rust) is what happens when you meant to implement an interface but haven’t, because of a programming error. I’d like the compiler to tell me what I did wrong. To me, that’s the advantage of explicitly declaring an interface/trait/type class, and it’s akin to declaring variables before use.

                  (Over)use of the ternary operator leads to unreadable code.

                  Overuse of anything, including goroutines, leads to unreadable code. That’s not an argument for or against any feature. I thought the article made a perfectly good case for how the ternary operator makes the code more readable.

                  1. 3

                    My issue with it in most implementations (that I know of, every one except for Haskell and Rust)

                    Haskell and Rust do not use structural typing. Both are nominative type systems, since traits/type classes are explicitly implemented for named types.

                    that when you want to implement an interface but haven’t because of a programming error. I’d like the compiler to tell me what I did wrong.

                    As my example showed, you can do this in Go as well. When I used Go (as a stopgap between not wanting to go back to C++ and waiting for Rust 1.0 to be released), I used this approach to assert that types implement the interfaces I wanted them to.

                    Go’s approach has different problems – you can’t implement interface methods externally for a data type that is not under your control. E.g. in Rust or Haskell, one can define a trait/type class and implement it for various ‘external’ types. In such cases, you have to wrap the data type in Go, so that you can define your own methods.
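                    A hedged sketch of that wrapping workaround, using url.URL as the “external” type (the Pretty interface and PrettyURL wrapper are made up for illustration):

                    ```go
                    package main

                    import (
                    	"fmt"
                    	"net/url"
                    )

                    // Pretty is an interface of ours. url.URL comes from the standard
                    // library, so we cannot define new methods on it directly.
                    type Pretty interface {
                    	PrettyString() string
                    }

                    // PrettyURL wraps the external type via embedding; the method is
                    // then defined on the wrapper instead.
                    type PrettyURL struct {
                    	url.URL
                    }

                    func (p PrettyURL) PrettyString() string {
                    	return p.Scheme + "://" + p.Host
                    }

                    func main() {
                    	u, err := url.Parse("https://example.com/some/path")
                    	if err != nil {
                    		panic(err)
                    	}
                    	var p Pretty = PrettyURL{*u}
                    	fmt.Println(p.PrettyString())
                    }
                    ```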

                    Overuse of anything, including goroutines, leads to unreadable code. That’s not an argument for or against any feature.

                    Not all language constructs are equal in how much they obscure readability. Many modern languages (e.g. Rust) choose to omit the ternary operator because their designers believe it leads to worse readability.

                    1. 2

                      Rust has a ternary operator. It’s not spelled ? :, but since every if statement is an expression, it’s there: https://doc.rust-lang.org/reference/expressions/if-expr.html#if-expressions

                      1. 1

                        I know that if is an expression in Rust; I use it daily :). And still people ask for ? : because it is more terse. But a nested if expression is much easier to read than nested use of the ternary ? : operator.

                  2. 1

                    While the structural typing thing is (IMO) not a problem (it means you need to write your code in a style suited to structural typing rather than nominative typing, which everybody used to duck typing already does), other points made here are both new to me (as someone who has not written any non-trivial go code but has been casually following it since its release) and sort of shocking.

                    Capitalization for identifier visibility is, as you mentioned, sometimes a time-saver when reading code, but as OP mentioned, it is both less expressive than multi-tier scoping rules and a potential source of enormous diffs when refactoring. This is a case where, fairly unambiguously, something that seems like a clever idea at first blush has massive knock-on effects that large projects need to work around and that affect iteration speed (because you sometimes need to search and replace identifiers in a whole module, and if you’re not careful to write your code in a style where private identifiers that might need to become public in the future are rarely referenced directly, you’re liable to have a diff on nearly every line).

                    I’m generally in favor of capitalization having semantic meaning (I like how, in Prolog-influenced languages, capitalization distinguishes identifiers from atoms), but nothing indicated by part of an identifier name ought to be something that changes during implementation, unless most uses of that identifier will already need to change to match.

                    The absence of a ternary operator is sort of shocking. It’s not the compiler’s job to paternalistically enforce good style, and using the ternary operator judiciously is often better style than avoiding it entirely (especially when what is conceptually a single expression – say, an assignment – needs a default case). Languages that lack a ternary operator tend to support || in expressions in order to handle the most common case – handling defaults in the case of a null or nullish value. Leaving out the ternary operator feels philosophically out of line in go, and more in line with paternalistic languages like Java. (Although Java has a ternary operator, it does all sorts of other things at the compiler level to enforce an idea of ‘good style’ that, because it lacks nuance, often makes code substantially worse & harder to read.)

                    Go’s error handling (or lack thereof) is familiar and has been covered before, so I don’t think it surprises anybody. It’s basically like C’s. OP seems to mention it for the sake of completeness – since, yes, it’s easier to silently ignore errors when you don’t have an exception-like system, and the tendency of C code to accidentally ignore important errors is precisely why most modern languages have exceptions.

                1. 6

                  Thank you for engaging, @atilaneves!

                  Three questions/topics from my side:

                  a) Can BetterC be used to link against and leverage large C++ libraries (e.g. Qt or Boost)? That is, can BetterC be used as essentially a C++ replacement (without D’s GC, D’s standard library (Phobos), or any other D runtime dependencies)? For example, can I build a Qt- or wxWidgets-based app for FreeBSD, Linux, Windows, and macOS using BetterC and Qt only?

                  b) Can you describe, for us non-D folks, DIP1000? (It seems to be a feature implementing Rust-like semantics for pointers, but the compare/contrast was not clear.)

                  c) Mobile app development – does D have a roadmap or production-ready capabilities in this area, and for which platforms?

                  Thank you again for your time.

                  1. 5

                    I don’t see how betterC helps with calling C++. D can already call C++ now; it’s just not easy, especially if the libraries are heavily templated.

                    DIP1000 is D’s answer to Rust’s borrow checker. You can read the dip here. Essentially it makes it so you can’t escape pointers.

                    There’s been some work done for Android, but the person doing that left the community. It was possible to run D there, but I’m not sure what the current status is.

                    1. 3

                      Thank you.

                      WRT C++/D compatibility, I watched a video for this paper https://www.walterbright.com/cppint.pdf but, if I remember right, it was from 2015 – and I could not figure out whether, after D was officially included in GCC, there were any updates to the C++ ABI compatibility feature.

                      1. 1

                        The ABI should just work. Otherwise it’s a bug.

                  1. 8

                    The talk covers the psychology of language adoption, how C++ conquered the world, and how D can learn from (and shamelessly steal from) it.

                    OP and speaker here. AMA.

                    1. 15

                      In a way this is why I use Go. I like the fact that not every feature that could be implemented is implemented; I think there are better languages if you want that. Don’t use a language just because it is by Google, either.

                      Also, I think it is actually more the core team than Google. I think that if Go were the company’s language, it would be much different from what we have now. It would probably look more like Java or Dart.

                      One needs to see the context. Go is by people with a philosophy in the realm of “less is more” and “keep it simple”, so community-wise it is closer to Plan 9, suckless, cat-v, OpenBSD, etc. That is, people taking pride in not creating something for everyone.

                      However unlike the above the language was hyped a lot, especially because it is by Google and especially because it was picked up fairly early, even by people that don’t seem to align that much with the philosophy behind Go.

                      I think generics are just the most prominent example of “why can’t I have this?”. Compare it with the communities mentioned above: various suckless software, Plan 9, OpenBSD. If somehow all the Linux people were thrown onto OpenBSD, a lot of them would probably scream at Theo about how they want this and that, and there would probably be some major thing they “cannot have”.

                      While I don’t disagree with “Go is owned by Google”, I think on the design side (and generics are a part of that) it’s owned by a core team with mostly aligned ideas. While I also think that Google certainly has a bigger say, even on the design side, than the rest of the world, I think the same constellation of authors, independently of Google, would have led to a similar language, with probably far fewer users and available libraries; I also don’t think Docker and other projects would have picked it up, at least not that early.

                      Of course there are other things, such as easy concurrency, that could have played a role in adoption, but such a Go would probably have had a lot of downsides. It probably would have seen fewer performance improvements and slower garbage collection, because I don’t think there would be many people working so much in that area.

                      So, to sum it up: while Google probably has a lot of say, I don’t think that is the reason for not having generics. Maybe Go doesn’t have generics (yet) despite Google – after all, they are a company where a large part of the developers have generics in their day-to-day programming language.

                      EDIT: Given their needs I could imagine that Google for example was the (initial) cause for type aliases. I could be wrong of course.

                      1. 8

                        it was picked up fairly early, even by people that don’t seem to align that much with the philosophy behind Go.

                        Personally, I think this had a lot to do with historical context. There weren’t (and still aren’t, really) a lot of good options if you want a static binary (for ease of deployment / distribution) and garbage collection (for ease of development) at the same time. I think there were a lot of people suffering from “interpreter fatigue” (I’ve read several times that Python developers flocked to Go early on, for example). So I think that, for quite a few people, Go is just the least undesirable option, which helps explain why everyone has something they want it to do differently.

                        Speaking for myself, I dislike several of the design decisions that went into Go, but I use it regularly because for the things it’s good at, it’s really, really good.

                        1. 5

                          There weren’t (and still aren’t, really) a lot of good options if you want a static binary (for ease of deployment / distribution) and garbage collection (for ease of development) at the same time.

                          Have you looked at “another language”, and if so, what are your thoughts?

                          1. 4

                            Not a whole lot. My superficial impression has been that it is pretty complicated and would require a pretty substantial effort to reach proficiency. That isn’t necessarily a bad thing, but it kept me from learning it in my spare time. I could be totally wrong, of course.

                          2. 4

                            There weren’t (and still aren’t, really) a lot of good options if you want a static binary (for ease of deployment / distribution) and garbage collection (for ease of development) at the same time

                            D has both.

                            1. 2

                              I completely agree with your statement regarding the benefits and that this is certainly a reason to switch to Go.

                              That comment wasn’t meant to say that there is no reason to pick up Go, but rather that despite the benefits you mentioned, if there weren’t a big company like Google backing it, it might have gone unnoticed, or at least other companies would have waited longer before adopting it; I find it unlikely it would be where it is today.

                              What I mean is that a certain hype and a big company behind it are a factor in this being “a good option” for many more people, especially when arguing for a relatively young language “not even having classes and generics” and, in the beginning, a fairly primitive/simple garbage collector.

                              Said communities tend to value these benefits much more highly than average and align very well in terms of what people emphasized. But of course one can’t be sure what would have happened, and I am also drifting off a bit.

                          1. 3

                            Having to work to tell a computer what it already knows is one of my pet peeves.

                            A type is not only for the computer; it’s also for the human reading the source. The more you leave to the computer to work out on its own, the more the human reading you will have to hunt for that now-hidden information.

                            I also believe that wanting to know the exact type of a variable is a, for lack of a better term, “development smell”, especially in typed languages with generics.

                            Why?

                            I think that the possible operations on a type are what matters, and figuring out the exact type if needed is a tooling problem.

                            Yes, tooling can overcome the issue described above (having to hunt for that now-hidden information), but that does not explain why it is preferable. What is gained by doing so?

                            1. 6

                              A type is not only for the computer, it’s also for the human reading the source

                              It depends. It doesn’t really matter whether myfunc returns a std::vector or a std::list if all I’m doing is filtering the results. It does matter, however, that the type has .cbegin() and .cend() iterators.

                              Why?

                              Because of what I wrote just afterwards: “I think that the possible operations on a type are what matters”.

                              but it is not explaining why it is preferrable. What is gained by doing so?

                              Personally, I hardly ever care what type something is unless it’s terribly named. Otherwise it’s getting passed to another function/algorithm anyway. Even if I did know the type, I’d more likely than not have to jump to its definition to find out / remember what it is and what I can do with it.

                              As for what is gained: refactoring. With auto I don’t have to change all the variable declarations. There’s also the avoidance of implicit conversions.
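                              The refactoring point translates to Go’s := as well; a sketch with a hypothetical function whose return type changed during a refactor:

                              ```go
                              package main

                              import "fmt"

                              // Imagine loadScores once returned []int and was later
                              // refactored to return map[string]int (hypothetical example).
                              func loadScores() map[string]int {
                              	return map[string]int{"alice": 10, "bob": 7}
                              }

                              func main() {
                              	// With inference this declaration survives the refactor
                              	// untouched; an explicit `var scores []int = loadScores()`
                              	// would have to be edited.
                              	scores := loadScores()
                              	fmt.Println(len(scores))
                              }
                              ```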

                              1. 1

                                If the code base is already good enough that all implementations of an interface are interchangeable (no type-specific side effects to worry about), wouldn’t the refactoring already be mostly possible with tools, with minimal additional work to verify and touch up possible artifacts?

                                You write that you wonder why this is a debate at all. But then, type inference is a “heavy” feature in a language and reduces code legibility. It has to demonstrate a clear advantage for such a debate to be settled.

                                C++ is too bloated for me to use anyway, and this is something that I would suspect of most languages implementing type inference. There is still no clear advantage to having it (I mean, it won’t really come into consideration when choosing a language to work with).

                                1. 3

                                  Here’s a way to settle the debate: remove local type inference from a language that has it, and see how users react.

                                  Of course, you’ll have to find a language designer willing to do such a thing, but then you’ll see if e.g. legibility is actually an issue: if people complain that adding local type inference reduces legibility (as you say), and just as many people complain that removing it also reduces legibility, then you may be able to make a claim as to whether “legibility” is a subjective property.

                                  1. 2

                                    wouldn’t the refactoring already be possible mostly by tools, with minimal additional work to verify and touch-up possible artifacts?

                                    Even so, who wants large diffs for no reason?

                                    But then, type inference is a “heavy” feature in a language and reduce code legibility

                                    Which seems to be the point of the people who prompted me to write the post in the first place. I understand that that’s your opinion, and it’s one I don’t agree with. What’s odd is that I’m not the only one - there are many languages where this debate doesn’t seem to happen.

                                    C++ is too bloated for me to use anyway, and this is something that I would suspect of most languages implementing type inference

                                    It depends on your definition of bloated. Would C be bloated if all we did was add type inference to it?

                              1. 5

                                Huge congratulations!

                                Some questions I managed to remember from the time when I tried, uhmh, to write my own build utility and discovered, as many folks do, that it’s waaay harder than it looks:

                                • (how) does it handle auto-detection of .h files included from a .c file?
                                • does it detect removed files to discover a file needs rebuilding? e.g. if foo.c has #include <bar.h>, and after compilation bar.h file is removed from disk, does it try to rebuild foo.c (and reports an error)?
                                • how can I add extra compilation flags: to some files in the project? to all files in the project? to some linked libraries?
                                • would it be able to correctly support Java files “intelligently”? taking into account that compiling a .java file can emit >1 .class files? (e.g. because of inner classes)
                                1. 3

                                  Normally, dependencies are determined by the compiler. That’s how CMake and ninja do it.

                                  1. 2

                                    Yes, but there are tricky situations, and each of CMake and ninja has to either have a way to handle them or risk missing them – i.e. “corner cases” to include in every respectable build system’s test suite. The “…[#include-d] bar.h file is removed from disk…” scenario that I tried to describe tersely above is one such case. Java source files, with some of them emitting 2+ .class files from one .java file, are another tricky case. If I learn of a build system that demonstrably handles both those cases correctly, in an automated way (without having to resolve those situations/files by hand), I’d be immediately interested in it. I don’t know of such a system yet that isn’t also a huge monstrosity like Maven or something.

                                  2. 2

                                    Xmake has dealt with these issues.

                                    1. 1

                                      Can you explain a bit more how this is handled in xmake? I am really curious to understand!

                                      1. 7

                                        (how) does it handle auto-detection of .h files included from a .c file?

                                        When compiling *.c files with gcc, xmake adds the -H flag to get all dependent .h files, then checks their modification times.

                                        does it detect removed files to discover a file needs rebuilding? e.g. if foo.c has #include <bar.h>, and after compilation bar.h file is removed from disk, does it try to rebuild foo.c (and reports an error)?

                                        xmake will also cache the full list of dependent .h files in dependency files (*.d) and check them when building.

                                        how can I add extra compilation flags: to some files in the project?

                                        add_files("src/test.c", {cflags = "-Dxxx", defines = "yyy"})
                                        

                                        to all files in the project?

                                        add_cflags("-Dxxx") -- add flags to root scope for all targets
                                        target("test")
                                            add_files("src/*.c")
                                        
                                        target("test2")
                                            add_files("src2/*.c")
                                            add_cflags("-Dyyy")  -- only for target test2
                                        

                                        to some linked libraries?

                                        target("test")
                                            add_files("src/*.c")
                                            add_links("pthread")
                                            add_linkdirs("xxx")
                                        

                                        would it be able to correctly support Java files “intelligently”? taking into account that compiling a .java file can emit >1 .class files? (e.g. because of inner classes)

                                        Java is not supported yet.

                                    2. 1

                                      I write C and C++ but I use GNU Make.

                                      For questions #1 and #2, can’t you just add the -MMD -MP -MT compiler switches to gcc and clang (I have no idea whether Visual C++ supports these, since I don’t use that compiler and just use MinGW on Windows), then include the generated .d files in your makefile?

                                      1. 2

                                          Yes, you can do that, but supporting MSVC is critical: the majority of developers use Windows, so if your support for Windows isn’t first-rate, it greatly reduces your probability of success.

                                        1. 2

                                            We can add the -showIncludes flag to get include dependencies and generate .d files for MSVC when building files. @akavel

                                          1. 1

                                            I seem to recall that it was exactly the .d files + make which had trouble detecting a removed .h file. Though I may be wrong, it’s been a long time. Still, given the trickiness, unless I see an explicit test case…

                                        1. 34

                                          Build systems are hard because building software is complicated.

                                          Maybe it’s the first commit in a brand new repository and all you have is foo.c in there. Why am I telling the compiler what to build? What else would it build??

                                          Compilers should not be the build system; their job is to compile. We have abstractions, layers, and separation of concerns for a reason. Some of those reasons are explained in http://www.catb.org/~esr/writings/taoup/html/ch01s06.html. But the bottom line is that if you ask a compiler to start doing build-system things, you’re going to be frustrated later on when your project is complex and the build system/compiler mix doesn’t do something you need it to do.

                                          The good news is that for trivial projects, writing your own build system is likewise trivial. You could do it in a few lines of bash if you want. The author did it in 8 lines of Make but still thinks that’s too hard? I mean, this is like buying a bicycle to get you all around town and then complaining that you have to stop once a month and spend 5 minutes cleaning and greasing the chain. Everyone just looks at you and says, “Yes? And?”

                                          1. 5

                                            The author could have done it in two if he knew Make. And no lines if he just has a single file project. One of the more complex projects I have uses only 50 lines of Make, with 6 lines (one implicit rule, and 5 targets) doing the actual build (the rest are various defines).

                                            1. 3

                                              What are the two lines?

                                              1. 4

                                                I’m unsure what the two lines could be, but for no lines I think spc476 is talking about using implicit rules (http://www.delorie.com/gnu/docs/make/make_101.html) and just calling “make foo”

                                                1. 2

                                                  I tried writing it with implicit rules. Unless I missed something, they only kick in if the source files and the object files are in the same directory. If I’m wrong, please enlighten me. I mentioned the build directory for a reason.

                                                  1. 2

                                                    Right, the no lines situation only applies for the single file project setup. I don’t know what are the 2 lines for the example given in the post.

                                                2. 3

                                                  First off, it would build the executable in the same location as the source files. Sadly, I eventually gave up on a separate build directory to simplify the makefile. So with that out of the way:

                                                  CFLAGS ?= -Iinclude -Wall -Wextra -Werror -g
                                                  src/foo: $(patsubst %.c,%.o,$(wildcard src/*.c))
                                                  

                                                  If you want dependencies, then four lines would suffice—the two above plus these two (and I’m using GNUMake if that isn’t apparent):

                                                  .PHONY: depend
                                                  depend:
                                                      makedepend -Y -- $(CFLAGS) -- $(wildcard src/*.c) 
                                                  

                                                  The target depend will modify the makefile with the proper dependencies for the source files. Okay, make that GNUMake and makedepend.

                                                3. 1

                                                  Structure:

                                                  .
                                                  ├── Makefile
                                                  ├── include
                                                  │   └── foo.h
                                                  └── src
                                                      ├── foo.c
                                                      └── prog.c
                                                  

                                                  Makefile:

                                                  CFLAGS = -Iinclude
                                                  VPATH = src:include
                                                  
                                                  prog: prog.c foo.o
                                                  foo.o: foo.c foo.h
                                                  

                                                  Build it:

                                                  $ make
                                                  cc -Iinclude   -c -o foo.o src/foo.c
                                                  cc -Iinclude    src/prog.c foo.o   -o prog
                                                  
                                                  1. 1

                                                    Could you please post said two lines? Thanks.

                                                    1. 4

                                                      make could totally handle this project with a single line actually:

                                                      foo: foo.c main.c foo.h
                                                      

                                                      That’s more than enough to build the project (replace .c with .o if you want the object files to be generated). Having subdirectories would indeed make it more complex, but for building a simple project, we can use a simple organisation! Implicit rules are made for the case where source and include files are in the same directory as the Makefile. Now we could argue whether or not that’s good practice. Maybe make should have implicit rules hardcoded for src/, include/ and build/ directories. Maybe not.

                                                      In your post you say that Pony does it the good way by having the compiler be the build system, building projects in a simple way by default. Maybe ponyc is aware of directories like src/ and include/, and that could be an improvement over make here. But that doesn’t make its build system simple. When you go to the ponylang website, you find links to “real-life” pony projects. First surprise: 3 of them use a makefile (and what a makefile…): jylis, ponycheck, wallaroo + rules.mk. One of them doesn’t, but it looks like the author put some effort into his program organisation so ponyc can build it the simple way.

                                                      As @bityard said, building software is complex, and no build system is smart enough to build any kind of software. All you can do is learn your tools so you can make better use of them and make your work simpler.

                                                      Disclaimer: I never looked at pony before, so if there is something I misunderstood about how it works, please correct me.

                                                  2. 2

                                                    Build systems are hard because building software is complicated.

                                                    Some software? Yes. Most software? No. That’s literally the point of the first paragraph of the blog.

                                                    Compilers should not be the build system

                                                    Disagree.

                                                    We have abstractions, layers, and separation of concerns for a reason

                                                    Agree.

                                                    But the bottom line is if you ask a compiler to start doing build system things, you’re going to be frustrated later on when your project is complex and the build system/compiler mix doesn’t do something you need it do.

                                                    Agree, if “the compiler’s default behaviour is the only option”. Which would be silly, since the blog’s first paragraph argues that some projects need more than that.

                                                    The good news is that for trivial projects, writing your own build system is likewise trivial as well

                                                    I think I showed that’s not the case. Trivial is when I don’t have to tell the computer what it already knows.

                                                    The author did it in 8 lines of Make but still thinks that’s too hard?

                                                    8 lines is infinity times the ideal number, which is 0. So yes, I think it’s too hard. It’s infinity times harder. It sounds like a 6 year old’s argument, but it doesn’t make it any less true.

                                                    1. 7

                                                      I have a few projects at work that embed Lua within the application. I also include all the modules required to run the Lua code within the executable, and that includes Lua modules written in Lua. With make I was able to add an implicit rule to generate .o files from .lua files so they could be linked in with the final executable. Had the compiler had the build system “built in” I doubt I would have been able to do that, or I still would have had to run make.
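Roughly this kind of pattern rule (the exact luac/ld incantation below is a sketch of the general technique, not the actual makefile):

```make
# Byte-compile each Lua module, then wrap the bytecode in a relocatable
# object file so the linker can embed it in the final executable.
%.o: %.lua
	luac -o $*.luac $<
	ld -r -b binary -o $@ $*.luac
```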

                                                      1. -1

                                                        Compilers should not be the build system

                                                        Disagree.

                                                        Please, do not ever write a compiler.

                                                        Your examples are ridiculous: using shell invocation and find is far, far from the simplest way to list your sources, objects and output files. As others pointed out, you could use implicit rules. Even without implicit rules, that would be 2 lines instead of those 8:

                                                        foo: foo.c main.c foo.h
                                                                gcc foo.c main.c -o foo
                                                        

                                                        Agree, if “the compiler’s default behaviour is the only option.

                                                        Ah, then you want the compiler to embed in its code a way to be configured for each and every possible build it could be used in? This is an insane proposition, when the current solution is either the team writing the project configuring the build system as well (which could be done in shell, for all that matters), or thin wrappers like the ones Rust and Go use around their compilers: they foster best practices while leaving the flexibility needed by heavier projects.

                                                        You seem so arrogant and full of yourself. You should not.

                                                        1. 3

                                                          I’d like to respectfully disagree with you here.

                                                          Ah, then you want the compiler to embed in its code a way to be configured for every and all possible build that it could be used in?

                                                          That’s not at all what he’s asking for.

                                                          This is an insane proposition

                                                          I think this is probably true.

                                                          You seem so arrogant and full of yourself. You should not.

                                                          Disagree. He’s stated his opinion and provided examples demonstrating why he believes his point is valid. Finally, he has selectively defended said opinion. I don’t think that’s arrogance at all. This, for example, doesn’t read like arrogance to me.

                                                          I don’t appreciate the name calling and I don’t think it has a place here on lobste.rs.

                                                          1. -3

                                                            What is mostly arrogant is his dismissal of “dumb” tools, simple commands that will do only what they are asked to do and nothing else.

                                                            He wants his tools to presume his intentions. This is an arrogant design, which I find foolish, presumptuous, uselessly complex and inelegant. So I disagree on the technical aspects, certainly.

                                                            Now, the way he constructed his blog post and main argumentation is also extremely arrogant, or in bad faith: presenting his own errors as normal ways of doing things and accusing other people of building bad tools because they would not do things his way. This is supremely arrogant and I find it distasteful.

                                                            Finally, his blog is named after himself and seems a monument to his opinion. He could write on technical matters without putting his persona and ego into it, which is why I consider him full of himself.

                                                            My criticism is that, besides his technical propositions, which I disagree with, the form he uses to present them does him a disservice by putting the people he interacts with on edge. He should fix that if he wants his writings to be at all impactful, in my opinion.

                                                            1. 2

                                                              the form he uses to present them does him a disservice by putting the people he interacts with on edge

                                                              Pot, meet Kettle.

                                                              Mirrors invite the strongest responses.

                                                      2. 1

                                                        Yeah. On the flip side, too much configuration makes for overcomplicated build systems. For me, there’s a sweet spot with cmake.

                                                      1. 9

                                                        If I touch any header, all source files that #include it will be recompiled. This happens even if it’s an additive change, which by definition can’t affect any existing source files.

                                                        This isn’t correct, plenty of additive changes could necessitate recompiling other translation units.

                                                        Just off the top of my head:

                                                        • Adding a field to a struct or class
                                                        • Adding a visibility modifier before a field on a struct or class
                                                        • Adding a virtual function
                                                        • Adding a destructor
                                                        • Adding a copy constructor
                                                        • Adding the notation that specifically tells the compiler to not automatically generate either of the above
                                                        • Adding a new function overload
                                                        • Adding a new template specialization
                                                        • Adding any code to a template body
                                                        • Adding a * after a type in any function declaration
                                                        • Adding a * after a type in a typedef
                                                        • Adding any number of preprocessor directives
                                                        1. 4

                                                          I think an “additive change” here refers to adding something completely new (that nothing existing could possibly already depend on); this is in contrast to the addition of text which could modify the semantics of existing functions or types, as I believe most of your examples do in some way.

                                                          1. 6

                                                            No build system is going to be able to, in general, distinguish those two scenarios. Consider that your definition of an additive change to foo.h depends not just on the contents of foo.h but the contents of every other header and compilation unit referenced by any compilation unit referencing foo.h, taking into account both macro expansions and compile-time Turing-complete template semantics.

                                                            The net effect is that you need to re-compile the entire tree of compilation units that ever reference foo.h just to determine whether or not you need to re-compile them anyway. Otherwise how do you know whether int bar(short x) { return x; } is a purely additive change introducing a never-before-seen function bar, or a more-specific overload of some other bar defined in one of the compilation units that included foo.h? You can’t rule that out a priori.

                                                            I’m almost positive that even adding a simple variable doesn’t meet the “unambiguously additive” definition because you could construct some set of templates such that SFINAE outcomes would change in its presence. Ditto typedefs.

                                                            There are C macro constructs that also let you alter what the compiler sees based on whether or not a variable/function/etc. exists. So even if you had a database of every single thing a compilation unit referenced on the last compile, and could rule out that foo.c saw any bar at that point, it’s still impossible to know whether a given addition of a bar to a header would cause something new to be seen by the compiler on a subsequent compilation of foo.c, be it via macro trickery or template metaprogramming.

                                                            1. -2

                                                              A compiler can distinguish between those scenarios. That was the whole point of the blog.

                                                              1. 6

                                                                Your assertion is incorrect, or at least you’re not understanding the semantics of C macros and/or C++ template metaprogramming. Holding the AST of foo.c in memory is not sufficient to determine whether or not any change to foo.h is “additive”, because any change to foo.h can in the extreme lead to a completely different AST being constructed from the textual contents of foo.c on the next compile, in addition to executing an arbitrary amount of Turing-complete c++ compile time template metaprogramming.

                                                                You need to reconstruct the AST from foo.c based on the new contents of foo.h to determine whether or not the change was “additive”. That’s a recompilation of foo.c. You save nothing.

                                                                1. 1

                                                                  Compilers are not magic. They have to process things (compile them, if you will) to know what will happen at the end. Now maybe you can just go to the AST (if your compiler keeps one) and know from there what changed, but you still need to compile everything to an AST again, and diffing the ASTs to work out what to rebuild can be complicated and more expensive than just turning the AST into the output. Maybe, just maybe, it only makes sense for massive projects, and not your proposed 10-file project.

                                                            2. -1

                                                              That’s not what I meant, and I used C instead of C++ for a reason. I meant additive in terms of adding to the API without changing what came before it. I could have been clearer.

                                                            1. 9

                                                              Some languages that do this well:

                                                              • rust (with cargo)

                                                              • d (with dub)

                                                              • go (kinda) – build process itself is easy enough but the entire infrastructure around building go packages is a mess.

                                                              1. 2

                                                                I write D for a living. Dub is a good package manager but it’s a terrible build system. Utterly dire.

                                                                1. 1

                                                                  Even with languages that don’t have a built in build system, newer build tools do discovery of source files. Examples in the JVM world include gradle, sbt and lein.

                                                                1. 16

                                                                  I also think that new languages should ship with a compiler-based build system, i.e. the compiler is the build system.

                                                                  Doesn’t Go already do this ?

                                                                  1. 16

                                                                    I think Cargo works well at this. It’s a wrapper for the compiler, but it feels so well-integrated that the distinction doesn’t matter. I’ve never had trouble with stale files with Cargo, or force-built like I’ve had to with Make.

                                                                    1. 13

                                                                      Rustc does as much of the ‘build system’ stuff as Cargo. rustc src/main.rs finds all the files that main.rs needs to build, and builds them all at once. The only exception (i.e. all Cargo has to do) is pointing it at external libraries.

                                                                      With external libraries, if you have an extern crate foo in your code, rustc will deal with that automagically as well if it can (it searches a search path for it; you can add things to the search path with -L deps_folder). Alternatively, regardless of whether or not you have an extern crate foo (optional as of Rust 2018; prior to that it was always necessary), you can define the dependency precisely with --extern foo=path/to/foo.rlib.

                                                                      All cargo does is download dependencies, build them to rlibs as well, and add those --extern foo=path/to/foo.rlib declarations (and other options like -C opt-level=3) to a rustc command line based on a config file.
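In other words, the command lines cargo ends up issuing look roughly like this (paths invented for illustration):

```
$ rustc --crate-type=rlib foo/src/lib.rs -o target/libfoo.rlib
$ rustc --extern foo=target/libfoo.rlib -C opt-level=3 src/main.rs
```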

                                                                      1. 4

                                                                        Oh, right! That’s neat. I did wonder whether Cargo looked through the module tree somehow, and the answer is that it doesn’t even need to.

                                                                    2. 5

                                                                      GHC tried to do this. I don’t personally feel that it was a good idea, or that it worked out very well. Fortunately, it wasn’t done in a way that interfered with the later development of Cabal.

                                                                      1. 2

                                                                        Having written a bunch of Nix code, to invoke Cabal, to set up ghc-pkg, to invoke ghc, I would say the situation is less than ideal (and don’t get me started on Stack..) ;)

                                                                      2. 1

                                                                        Not in the way I describe, no.

                                                                        1. 7

                                                                          How so? go install pretty much behaves as make install for the vast majority of projects?

                                                                          1. 0

                                                                            Did you read the blog? The reason I want the compiler to be involved is to have dependencies calculated at the AST node level. That’s definitely not what Go does.

                                                                            1. 4

                                                                              I read it; I was under the impression that your main point was that build systems should “Just Work” without all sorts of Makefile muckery, and that “the compiler [should be] the build system”. The comment about AST based dependencies seemed like a footnote to this.

                                                                              The go command already works like that. I suppose AST based dependencies could be added to the implementation, but I’m not sure if that would be a major benefit. The Go compiler is reasonably fast to start with (although not as fast as it used to be), and the build cache introduced in Go 1.10 works pretty well already.

                                                                              1. 3

                                                                                I want the compiler … to [calculate dependencies] at the AST node level. That’s definitely not what Go does.

                                                                                Technically the go tool isn’t the Go compiler (6g/8g back then, cmd/compile now), but practically it is, and since the introduction of modules, it definitely parses and automatically resolves dependencies from source. (Even previous, separate tools like dep parsed the AST and resolved the dep graph with a single command.)

                                                                        1. 6

                                                                          I agree we can do better, and a custom language-specific solution will almost always beat a language-agnostic one. However, I didn’t see the author mention optimization techniques like inlining and Link Time Optimization (LTO), which I believe are key reasons why portions that seem disparate actually end up intertwined and thus require rebuilding.

                                                                          1. 4

                                                                            Good point on LTO and optimisation. The thing is, building an optimised binary is something I rarely do and don’t particularly care about. It’s all about running the tests for me.

                                                                            1. 4

                                                                              That’s kind of what I figured, but I didn’t see it mention that this only applies to debug builds. For debug builds, I totally agree with you!

                                                                          1. 2

                                                                            Good article!

                                                                            From my standpoint, choosing C improves the odds, as it makes it easier to get closer to the metal

                                                                            I don’t see how.

                                                                            Although some might argue C has slowed my productivity, bloated the code base, and made it susceptible to all manner of unsafety bugs

                                                                            Yes.

                                                                            that has not been my experience

                                                                            If not done already, fuzzing the compiler and using ASAN might be interesting.

                                                                            1. 1

                                                                              I, a unit test aficionado, posit that sometimes unit testing is a bad choice for your code. AMA.

                                                                              1. 8

                                                                                I posit that anything, treated uncritically and/or cargo culted, is a bad choice for your code. Terms like “unit test” have a brief lifetime as an interesting new category that may stimulate good new thoughts…after which they beg to be overconstrained, dogmatized, and rendered nothing but clickbait. Just write tests that test the behaviors you care about, as well as you can under the circumstances, as clearly and maintainably as possible—you know, just like the rest of the code—and don’t worry about what you call them. :)

                                                                                1. 1

                                                                                  Sure, I thought it was an interesting perspective from someone who prefers unit tests but thought that there was a better way to test the code in question.

                                                                                  1. 5

                                                                                    I find it’s more important to be clear about the scope of the behavior you’re trying to test than to apply arbitrary terms to describe tests.

                                                                                    For example, if you’re trying to test the runtime behavior of the makefile (essentially using cmake as a library to accomplish the task of the function under test), then yes, that’s not a good test. On the other hand, if the function is supposed to make human-readable makefiles and you want to check the indentation and comments in the makefile, that might be a great way to write that test. The term “unit test” does nothing to assist that analysis.

                                                                                    The point of the article that I think is profoundly important is: only test the thing you’re testing, and avoid accidentally testing things you didn’t mean to test.

                                                                                    1. 1

                                                                                      For example, if you’re trying to test the runtime behavior of the makefile (essentially using cmake as a library to accomplish the task of the function under test), then yes, that’s not a good test. On the other hand, if the function is supposed to make human-readable makefiles and you want to check the indentation and comments in the makefile, that might be a great way to write that test. The term “unit test” does nothing to assist that analysis.

                                                                                      I like this way of thinking, thanks for the insight.

                                                                                2. 5

                                                                                  I — also a unit test aficionado — posit that unit tests have no inherent value. In fact unit tests are a poor choice for verifying any behaviour that could otherwise be verified by cheaper means, e.g., types, property based testing, etc.

                                                                                  1. 2

                                                                                    Property testing is a form of unit testing, isn’t it?

                                                                                    1. 9

                                                                                      Technically speaking, “unit test” is about scope while “property based test” is about method and generativity. Seen this way, most PBTs are unit tests.

                                                                                      In practice, people usually mean “unit test” to mean manual oracle unit tests, and PBT to mean generative property tests of all scopes (but usually unit).

                                                                                    2. 1

                                                                                      It’s not either/or though. Sometimes types can’t get you there. Sometimes contracts can’t get you there. Even when they can, code can be buggy anyway. My first and only Haskell project compiled and passed all tests. And yet, it didn’t work at all as intended.

                                                                                      1. 3

                                                                                        Certainly. And I’m in no way suggesting that types will do everything and that they obviate the need for tests.

                                                                                        The reason for my — admittedly rather provocative — statement, is because it seems a prevalent idea in the industry (especially in the Software Craftsmanship community) that if unit tests aren’t your panacea, then you just aren’t unit testing hard enough. I think this is a damaging idea, and it deserves some pushback.

                                                                                  1. 6

                                                                                    In my experience, integration tests have the best ROI as they can be relatively simple to write, and test a large area of the code. They are good at spotting issues deep inside the code that might not be obvious at first, and perhaps even wouldn’t be caught by unit testing.

                                                                                    Unit testing on the other hand is perfect for testing library-like code, say your utility library for managing UTF-8 strings or date calculations, etc.

                                                                                    1. 3

                                                                                      I think that there are ways of architecting the system such that unit tests become better than integration tests, namely hexagonal testing. In my experience people usually write end-to-end tests because they’re easier to write, then wonder why CI is always red when they have 1000 of those. They’re flaky and, nearly as bad, slow to run, which makes people not run them after every change.
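
                                                                                      For what it’s worth, the hexagonal (ports-and-adapters) idea can be sketched in a few lines of Python — all names here are illustrative: the core logic depends only on a narrow port, so a unit test can plug in an in-memory fake instead of standing up the real system:

```python
from typing import Protocol

class UserRepo(Protocol):
    # The "port": the only thing the core logic is allowed to depend on.
    def find_email(self, user_id: int) -> str: ...

def greeting(repo: UserRepo, user_id: int) -> str:
    # Core logic: pure apart from the injected port.
    return f"Hello, {repo.find_email(user_id)}!"

class FakeRepo:
    # Test adapter: in-memory stand-in for the real database adapter.
    def __init__(self, data):
        self.data = data

    def find_email(self, user_id):
        return self.data[user_id]

def test_greeting():
    repo = FakeRepo({1: "alice@example.com"})
    assert greeting(repo, 1) == "Hello, alice@example.com!"

test_greeting()
```

                                                                                      An end-to-end test of the same behavior would need a real database and network; the port boundary is what lets the unit test stay fast and deterministic.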

                                                                                      1. 4

                                                                                        What is “hexagonal testing?” Because trying to Google that leads to athletics testing, not program testing.

                                                                                          1. 2

                                                                                            Thanks.

                                                                                      2. 1

                                                                                        Sort of countering and corroborating that, I think usage-based testing has the highest ROI, as advocated by Cleanroom in the 1980s. Their theory was that it’s better for software to have 100 bugs users never see than even 5 they often do. That perception would drive the acceptance and spread of the software. So, you look at the ways users can exercise the project, game out ways to turn those into tests, and run them first. That maximizes perceived quality.

                                                                                        Then, integration. Then, unit. These will improve actual quality but might take more resources. I said corroborate since the usage-based tests will likely be a form of integration testing since they test many features at once. Acceptance testing is the common term for this, though, since it’s usually black box.

                                                                                      1. 6

                                                                                        Some random thoughts:

                                                                                        Unit tests are one tool in a big toolbox. But what do we verify? I think you could make a distinction:

                                                                                        • Verify that the data a program outputs satisfies some constraints given by the inputs. Unit tests, types, etc. can do this job well.
                                                                                        • Verify that the behavior a system exhibits satisfies some constraints. Integration tests are useful here, but they fail when the system changes.

                                                                                        We assume that the encoding of these constraints is simpler than the program itself and has appropriate specificity, which is not always the case. Also, the ergonomics of your verification strategy may encourage you to be too specific.

                                                                                        The article describes a problem of being too specific, something that unit testing encourages. Integration testing can also suffer from the same problem of being overly specific, though I believe it doesn’t encourage it as much.

                                                                                        1. 2

                                                                                          I think unit tests can be best summarized as confirming that raw computation works. Does the adder add? Does the calculator of user progress return what it should? Does your check for competing resources actually check the correct state?

                                                                                          Anything outside that, beyond simple matters for the “Difference Engine,” shouldn’t have a unit test.
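
                                                                                          A sketch of what that looks like in Python (the adder and progress calculator are stand-ins, not from any real codebase):

```python
def add(a, b):
    return a + b

def progress(completed, total):
    # Fraction of steps completed, clamped to [0, 1]; 0 when total is 0.
    if total <= 0:
        return 0.0
    return min(completed / total, 1.0)

# Unit tests: does the adder add? Does the calculator of user progress
# return what it should?
assert add(2, 3) == 5
assert progress(5, 10) == 0.5
assert progress(12, 10) == 1.0
assert progress(0, 0) == 0.0
```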

                                                                                          1. 2

                                                                                            Thanks for the comment. I hadn’t considered the notion that integration testing could suffer from the same problem. You’ve sparked some new thoughts in my brain and that’s always welcome!

                                                                                          1. 1

                                                                                            I’m curious if the author used C++11 and later since this was written and if their opinion changed.

                                                                                            1. 2

                                                                                              Well they haven’t exactly removed much from the language, have they?

                                                                                              1. 3

                                                                                                That’s why the article should be at least 2x longer ;)

                                                                                                OTOH, adding some new features can deprecate old ones, e.g., adding std::expected<T, E> can actually produce code that doesn’t use exceptions (not that I’m saying exceptions should be deprecated; they have their uses, just don’t use them as a substitute for return false). So adding stuff can make things better.
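
                                                                                                The idea isn’t C++-specific; roughly the same shape can be sketched in Python with a hypothetical Ok/Err result type standing in for std::expected<T, E> — the failure travels in the return value rather than as an exception or a bare false:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")
E = TypeVar("E")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err(Generic[E]):
    error: E

def parse_port(text: str) -> Union[Ok[int], Err[str]]:
    # Failure is data in the return value -- no exception raised.
    if not text.isdigit():
        return Err(f"not a number: {text!r}")
    port = int(text)
    if not 0 < port < 65536:
        return Err(f"out of range: {port}")
    return Ok(port)
```

                                                                                                The caller has to inspect which case it got back, which is exactly what makes this a better substitute for return false than exceptions are.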

                                                                                                1. 1

                                                                                                  Of course adding stuff can make things better. Nobody on the C++ standards committee added features to make things worse. It’s however undeniable that the additions have made a large language even more gigantic and harder to learn.

                                                                                              2. 1

                                                                                                As someone who is not too familiar with the details of C++11 and C++17, did they address any of the issues that the author raised?

                                                                                                1. 2

                                                                                                  No.

                                                                                                  Every serious C++ project I’ve worked on has banned or heavily frowned upon the STL and most language features, and C++11/14/17 didn’t change that. Each version brings a few nice things but most of it is unusable.

                                                                                                2. 1

                                                                                                  Looks like they wrote a post about what they would like to see in a new C++ replacement, and C++11 and later hit some of the points. https://apenwarr.ca/log/20100721

                                                                                                  However, they think templates are awful, and they should feel bad about that.

                                                                                                1. 3

                                                                                                  Author here, AMA.

                                                                                                  1. 7

                                                                                                    This article is full of factual errors. It confuses source files with translation units, seems to think that there is always a foo.h/foo.{c,cpp} duality (the implementation of a header could be spread amongst several source files), and keeps talking about linking to .cpp files when one links object files instead.

                                                                                                    unlike most languages, C++ splits code into headers and translation-units

                                                                                                    No it doesn’t. It splits code into headers and implementation files. Translation units are (roughly) the result of running the preprocessor on a source file.

                                                                                                    Headers are not necessarily evil.

                                                                                                    Yes, they are. The only reason they exist is because when C was invented computers didn’t have enough RAM to compile a whole program.

                                                                                                    We can query the compiler for actual header-usage.

                                                                                                    That’s what ninja does. It’s not new.

                                                                                                    I don’t use C++ much these days, but if I did I’d be wary of using a package manager that doesn’t understand how C++ is built or what translation units are.

                                                                                                    1. 1

                                                                                                      It confuses source files with translation units

                                                                                                      They could be clearer, yes.

                                                                                                      seems to think that there are always a foo.h/foo.{c,cpp} duality

                                                                                                      The article did not read that way to me.

                                                                                                      keeps talking about linking to .cpp files when one links object files instead.

                                                                                                      To me it was implied that you must create object files first.

                                                                                                      That’s what ninja does. It’s not new.

                                                                                                      It doesn’t claim to be new, but IIRC Ninja does not use -M in the way described in the article. The article suggests using -M to verify that only explicitly depended upon header-files are included, resulting in fewer undefined reference errors.

                                                                                                      1. 1

                                                                                                        They could be clearer, yes.

                                                                                                        Not clearer; they could be correct.

                                                                                                        To me it was implied that you must create object files first.

                                                                                                        How? This is a quote: “Undefined references occur when you depend on a header, but not on the corresponding translation-unit(s).”

                                                                                                        It doesn’t claim to be new, but IIRC Ninja does not use -M in the way described in the article. The article suggests using -M to verify that only explicitly depended upon header-files are included, resulting in fewer undefined reference errors.

                                                                                                        Including a header doesn’t result in undefined reference errors unless:

                                                                                                        • Functions declared in the header are actually called, or
                                                                                                        • The header declares global variables that aren’t defined anywhere
                                                                                                    1. 15

                                                                                                      I’ve worked remote for the last three years, and my take on it is vastly different from yours.

                                                                                                      • I don’t go to an office or co-working space. I like saving time on commuting – even though I did enjoy my 10km cycle to work at my last job – and it’s much more distraction-free, especially since most offices/co-working spaces are effectively open office plans, and thus productivity-killers.

                                                                                                        I also enjoy the freedom it gives me (see next point).

                                                                                                      • I don’t have strict work hours, or a strict schedule, or anything of the sort. Basically, I work when I feel like it and don’t when I don’t feel like it. This may sound strange, but I found it works incredibly well. Sure, some weeks I don’t do much, but other weeks I work a lot and get a tremendous amount of work done, just because I feel like it.

                                                                                                        Not sticking to any sort of schedule is helpful: since I’ve got more time for other things in my less-productive weeks, I can maximize my productivity much more. Overall, I think I can fairly say I’ve been one of the most productive people in the company in the last few years.

                                                                                                        Your schedule also sounds a lot like the “work for 8 hours in a block”-schedule. I don’t like that kind of schedule at all. I much prefer “work for a few hours, do the dishes, do some shopping, maybe meet up with a friend, do another hour of work, cook for girlfriend and have dinner, chat some, work for another two hours”-kind of schedule (again, this depends on what I feel like; some days I do work for 8 – or more – hours straight, but it’s not forced).

                                                                                                      • Distractions are not much more of an issue for me than when I worked in the office. People slack off in offices all the time: chatting at the coffee machine, browsing reddit, etc. Remote work isn’t all that different, IMHO.

                                                                                                      I don’t think there’s a one-size-fits-all approach here; clearly your approach works well for you, so great! It’s just interesting that my approach is almost diametrically opposite.

                                                                                                      The other points (health, loneliness, hobby, fresh air, etc.) are more relatable for me.

                                                                                                      1. 6

                                                                                                        I’ve worked from home for the last 2 years and I’m right with you on basically all points.

                                                                                                        1. 6

                                                                                                          Some additional points from myself:

                                                                                                          • Buy noise-reducing or large over-ear headphones to block noise coming from outside and other places.
                                                                                                          • Having a dedicated room does wonders. I used to work on the couch, kitchen table, etc. It used to take extreme concentration sometimes, and it actually becomes exhausting. A room dedicated to work takes this stress off.
                                                                                                          • Share things happening IRL with your remote co-workers. This helps with bonding.
                                                                                                          • Multi-monitor is nice but not necessary. I actually find it distracting but there are times it’s helpful.
                                                                                                          • I know it’s mentioned but want to emphasize: try to get out at lunch and set a hard time for stopping work. Otherwise you’ll be working until 8pm (or 1am) every day, which I’m sure is common among us.
                                                                                                          1. 2

                                                                                                            I never use headphones, except for calls. I like to hear what’s happening in the offices around me. I should note that I have a big office just for myself and there is nobody who could bother me.

                                                                                                            1. 4

                                                                                                              Heh. I mean nothing too serious when I say this, but somewhere, some time ago, someone mentioned how developers kind of go into an “autistic state”, where we have no choice but to tune out everything and have extreme focus. I feel music helps me get into this state. But it also takes time to come out of this state. I generally feel extremely emotionless as well and time is nothing.

                                                                                                              Man our profession is something else.

                                                                                                              1. 3

                                                                                                                Depicting it as an ‘autistic state’ is not very charitable IMO. It’s just a flow state, and it is sometimes required to really understand a system.

                                                                                                                It can be extremely draining but it is a tool to employ when necessary.

                                                                                                                1. 4

                                                                                                                  is not very charitable

                                                                                                                  I know, but the way they described it (me just paraphrasing), the possible relation is interesting and makes me think it is plausible.

                                                                                                                  That is also why I worded it the way I did. There is also this general idea that autism is a horrible thing or a joke. It is such a wide spectrum. Here I guess I’m talking specifically about being “obsessive” and lacking emotion. These are good traits for any skilled worker.

                                                                                                                  I know many people with autism and autistic tendencies and ultimately to me they are just people wired differently, that’s all, and have their own dis/advantages.

                                                                                                                2. 2

                                                                                                                  To indirection: I’m with mattgreenrocks on using flow. There’s even a great book that popularized it called Peopleware. If you haven’t read it, buy and read it as soon as possible. Then pass it on. :)

                                                                                                                  To everyone: Well, unless there’s someone better aimed at developers that covers the same stuff about team dynamics and especially flow. Is Peopleware still the best in its class or no? Probably should also collect free resources online covering all those points in relatively-short, accessible writing. I do see bits and pieces show up in submissions.

                                                                                                                  Also, searching for it showed me there was a more recent update (3rd edition). I think the lessons are timeless enough that I’m safe in recommending the older version for its cheaper, used copies. Folks not worried about that might wonder if the newer one is better, though. Is it?

                                                                                                                  1. 2

                                                                                                                    Using the term flow will absolutely be more acceptable, heh. I just mentioned what I did here because I trust lobste.rs will see I have zero malicious intent behind my words, and curious to hear if anyone has heard this…idea, or whatever we want to call it.

                                                                                                                    1. 1

                                                                                                                      Oh, I gave you benefit of the doubt. I was pushing you toward a standard term and a resource you might like. :)

                                                                                                            2. 5

                                                                                                              Wow, your work ethic is really different from mine. I like yours, but I love to spend time with my family in the afternoon and the evening. Your post reminded me of a company where I worked before. Some employees would prefer to arrive at work way later than me. I was always the first one in the office. But I was also the first one to leave. I guess this hasn’t changed.