1. 10

    Reading this, in my mind I was defending C++, because there are reasons behind each of the things the post criticizes. For instance, the std:: prefix exists because bringing everything in std into the global namespace can result in interesting errors; the nightmares of C++ initialization were brought on by decades of backwards compatibility while attempting to reach a uniform syntax; and emplace_back is genuinely useful compared to push_back.

    But then I went back and looked over the original post and code, and pondered things like how e.path().c_str() invokes e.path().native(), which returns a wstring on Windows, meaning the code does not compile there. How the for loop that iterates over the word to convert it to lowercase is undefined behavior if any character is non-ASCII (since auto gives c the type char). Or that emplace_back does the same as push_back in this case, because there are no constructor arguments to forward for in-place construction.

    I am not trying to pick at Jussi’s code here; you can take the example code from most blog posts and nitpick details that are largely irrelevant to the message. Rather, it made me more aware of the number of details you have to consider when writing C++ code.

    In my opinion, starting with C++14 it is slowly becoming easier and easier to write good C++ code, but that does not take away all the complexity left by 30 years of backwards compatibility on heterogeneous hardware. It is easy to criticize this complexity if you have the luxury of only having to support the three major platforms of today and can assume things like little-endian, two’s complement, and UTF-8.

    1. 2

      As the article mentions, the standard requires that the input value be representable as an unsigned char, or be equal to EOF.

      Interestingly this has turned into a C++ footgun I’ve managed to fire in the past – using auto with a lambda in an STL algorithm working on a string can result in UB:

      count = std::count_if(s.begin(), s.end(), [](auto c) { return std::isalnum(c); });
      

      The problem is also mentioned on the cppreference page for isalnum.

      1. 3

        Considering that the short summary is generally limited to 50 characters, should punctuation be used there? I’ve seen mixed opinions on this and have been called out on putting a period at the end of my commit messages.

        1. 3

          With the caveat that you should follow the style of the repo you’re working on, I prefer to leave out the period at the end. If you ever use the e-mail based workflow, the first line is used as the subject, which you usually do not end with a period. Also, git does not add one in the messages it generates for merges and such.

          1. 1

            Aha! So that’s why it’s also called the subject line.

            That’s a great analogy for what that line is for – too many people treat it as a paragraph.

        1. 5

          Alternatively, there is the nonstandard GNU statement-expression (compound statement) extension, which lets you control the “return value” of the macro.

          But you should think twice before using this because inline functions are often as fast as macros.

          1. 6

            And have been a part of standard C for 20 years (defined in C99).

            1. 4

              Perhaps worth noting that the standard inline keyword is only a suggestion to the compiler – whether a call is actually inlined is up to the implementation. Usually the compiler knows when to inline, but for time-critical code most compilers have some way to force it (like the __always_inline__ attribute or __forceinline).

          1. 1

            This is an interesting idea, thanks for posting it.
            Have you tried combining the reordered results with running Zopfli with a large number of trials?

            Edited to change comparing to combining.
            I had meant the latter but written the former.
            The child posts by @Boojum, @jibsen, and @hyperpape precede this edit.

            Edited again to add another child post in the preceding paragraph.

            1. 3

              Given what was done here and what I know of Zopfli, I wouldn’t expect them to be alternative optimizations to each other. Rather, I’d expect they’d be very complementary and that each would make the other more effective. I suspect you’ll get better results from combining them than each approach can give individually.

              1. 2

                Indeed, Gzip and Zopfli use the deflate compressed data format, which (along with entropy coding) looks for repeated sequences (matches) in a 32k sliding window.

                So if you imagine two identical small files (< 16k) inside your tar archive – if they are more than 32k apart, both Gzip and Zopfli have to compress the data twice. If on the other hand the two files are right next to each other, the entire second file can be very compactly encoded using matches into the first file.

              2. 1

                I didn’t. According to Wikipedia, zopfli gives 3%-8% better compression than gzip, so better than what I got here.

                This was fun, and I think I might keep playing around with the idea, so I might do that comparison in the future.

                1. 1

                  I didn’t. According to Wikipedia, zopfli gives 3%-8% better compression than gzip, so better than what I got here.

                  Don’t underrate your results - compression is a mature area and Zopfli is a state of the art algorithm.

                  The results of running Zopfli are heavily dependent on the input file and the number of random trials.
                  The file is tiny so you can easily run 500,000+ trials without a long delay.
                  I used Zopfli for image optimization years ago and have seen larger improvements, e.g. 10% or more.

              1. 4

                Archivers that have a solid option like RAR have used grouping files by extension for a long time.

                1. 30

                  Can someone please help me understand how the announcement of a Linux subsystem for Windows becoming generally available is spam?

                  Prior to this you needed to be running a Windows Insider build in order to run it. I’ve seen a thousand thousand distro announcements posted here without their getting this kind of treatment.

                  What am I not understanding about the rules and guidelines for this community? Or are people just as downright nasty with the flagging as they seem to be to me?

                  1. 11

                    Could be that FOSS purists flagged it since WSL enables people to run Linux on a proprietary OS. (Which, if true, is a very silly reason IMO.)

                    I think WSL is great. It was a godsend on my previous job which was mostly a Windows kinda place. Before that I used Cygwin, but WSL is much much more convenient.

                    1. 8

                      I am a WSL user on some of my Windows systems, and I greatly appreciate it. That being said, a few points regarding “spam”:

                      • The linked website is Canonical’s; Canonical has had some controversy regarding contribution to FOSS (back in the day, the company was monetizing a FOSS-derived OS, based on Debian, without giving back to the FOSS community). I can only assume that some readers saw Canonical’s intention here as monetizing by piggybacking on Microsoft’s WSL2 availability.
                      • There are multiple advertising paragraphs in the blog post, which mention the enterprise/corporate offering of Canonical for WSL (the fourth paragraph, the first paragraph, and the title of the last section)
                      • The article (May 2020) mentions the availability of an OS in the WSL store. A tutorial (which is mostly generic and applies to almost any WSL2 migration/distro) is also present in the blog post. Ubuntu fails to mention what exactly the WSL2 OS is good for, how it compares to other distros (or to the main release), or what it brings that is new (except enterprise support).

                      Now, some of the above may have not triggered some members of the community. I’m human, so I’m mostly biased, and while I haven’t flagged or hidden this particular story, I can imagine myself having a bad day and flagging an article from a company monetizing FOSS without contributing back as spam (see Google, Amazon).

                      1. 13

                        I can imagine myself having a bad day and flagging an article from a company monetizing FOSS without contributing back as spam (see Google, Amazon).

                        That would be working against the site guidelines. Lobste.rs has always been a place for technical discussion; how that tech is monetized, whether it is proprietary, GPL, or BSD licensed, and how the company/individuals contribute back (if at all) should have no impact on whether a story is on topic or not.

                        Judge the content, not the person/entity behind it.

                        1. 2

                          Now, some of the above may have not triggered some members of the community. I’m human, so I’m mostly biased, and while I haven’t flagged or hidden this particular story, I can imagine myself having a bad day and flagging an article from a company monetizing FOSS without contributing back as spam (see Google, Amazon).

                          Right, and you yourself seem to agree that doing so is a mis-use of the flag mechanism.

                          The appropriate response would be either a constructive comment or, if you cared to and had the time, maybe even a private message about how the article lacks technical merit and probably doesn’t belong here, accompanied by simply not upvoting.

                          Or maybe we need a new flag “lacks technical merit” :)

                        2. 7

                          What am I not understanding about the rules and guidelines for this community? Or are people just as downright nasty with the flagging as they seem to be to me?

                          It’s definitely not just you. I’ve noticed frequent “flagging as spam” of late as well - just monitor Recent for a while and you’ll spot it immediately. The articles that are being flagged would certainly not have been flagged a year or two ago. Perhaps this warrants a wider discussion…

                          1. 5

                            Me too. I had a definitely-not-spam submission flagged the other day as well. I asked the mods to delete it; I don’t submit spam but hell if I wanted to offend anybody. As it turned out, others upvoted so it worked out okay.

                            But it left a bad taste in my mouth. I know we’re all supposed to assume positive intent, but as long as we’re not identifying anybody individually, I have a weird feeling that there’s something deliberately negative going on here. I don’t know what, but it doesn’t feel right. Content can be poor quality, bad advice, poorly-written, dated, or off-topic without it being anywhere near spam. In that case just don’t upvote it, or make a comment explaining what you think may be technically bad about the piece. (You know, you might be mistaken! I am mistaken quite a lot) I am concerned something’s not working as it should.

                            1. 2

                              It’s definitely not just you. I’ve noticed frequent “flagging as spam” of late as well - just monitor Recent for a while and you’ll spot it immediately. The articles that are being flagged would certainly not have been flagged a year or two ago. Perhaps this warrants a wider discussion…

                              I definitely think this warrants a wider discussion as well. Take a look at @gerikson’s comment above. There’s no bad intent there, but he’s using the SPAM flag as “I feel this article is lacking in technical merit”. He chose to un-flag as did the other person after I called the flagging choice into question, but I think we need to do some work as a community to come to a common understanding of what the flags are FOR and how we want to use them to make the community better.

                              1. 2

                                Yes, agreed. Taking a look at a recent story, How To Make Ubuntu Work Like Windows 10, it currently has a score of 5, made up of “+12, -2 off-topic, -5 spam”. Quite a mix, suggesting that there are some differing views about what posts are appropriate.

                                1. 3

                                  I also notice that the flag explanations link in the About page seems broken. I’m going to message the mods about that, might help people to understand the goals of the mechanism better.

                                  1. 2

                                    For what it’s worth I went and hunted down the explanation of what flags are for. “Spam” says “promotes a commercial service”. The explanation is in the middle of the “Ranking” section: https://lobste.rs/about

                                    1. 1

                                      I think the problem is that some people use “spam” as a catchall when flagging posts they think are inappropriate. I know that some people leave a comment when they do so, at least explaining their thinking, but they’re in the minority.

                              2. 7

                                I found it borderline, flagged it but I have since unflagged it.

                                What I’d like to see: a post that describes the differences between WSL1 and WSL2 and how they pertain to Ubuntu; why WSL2 is worth the update; what changes Ubuntu made to accommodate WSL2, etc.

                                Also what I’d like to see, what distribution (if any) is best for WSL ?

                                For Ubuntu, the more mindshare they have among WSL users, the better. So this entry can be seen as marketing.

                                Final edit: I removed a bunch of mildly self-pitying and sarcastic remarks about this comment being flagged, but it now looks like I was mistaken. I stand by my words above.

                                1. 2

                                  For Ubuntu, the more mindshare they have among WSL users, the better. So this entry can be seen as marketing.

                                  Final edit: I removed a bunch of mildly self-pitying and sarcastic remarks about this comment being flagged, but it now looks like I was mistaken. I stand by my words above.

                                  If you look at the description for the Spam flag it says: Promotes a commercial service.

                                  WSL is in fact a closed source proprietary commercial product sold by Microsoft. It’s certainly not a ‘service’ and I personally feel that while this article probably lacks technical merit, it’s probably not Spam under the current definition of the flag either.

                                  I actually thought long and hard before posting this, and what ultimately swayed me was the fact that WSL now being available to main line Windows 10 users not part of the Windows Insider program seemed like technical information that could be useful and interesting to the community here.

                                  So I guess the question for this community is - what do we want to be? If release announcements aren’t of interest because they lack the kind of deep technical content we want to see, then perhaps we should consider being clear about that.

                                  Anyway, lots of good discussion here. Thanks for taking part in it, and again thanks for explaining your motivations.

                                  1. 1

                                    a post that describes the differences between WSL1 and WSL2 and how it pertains to Ubuntu; why WSL2 is worth the update

                                    As I understood it, the main difference is that WSL1 was a compatibility layer that translated Linux system calls into the corresponding Windows API calls, whereas WSL2 is a (lightweight) VM running an actual Linux kernel.

                                    See: https://docs.microsoft.com/en-us/windows/wsl/compare-versions

                                    1. 1

                                      I actually installed WSL1 yesterday, and was mildly disappointed my computer is not yet updated to be able to handle WSL2.

                                      What I meant was I’d like Canonical to expand more on how they’ve worked (or not had to!) to work with WSL2.

                                      Edit: apparently WSL2 is faster, which I appreciate. I had been using Cygwin before, and reading and processing 10k files there was very fast; in WSL1 it’s painfully slow.

                                      1. 3

                                        Yes, WSL1’s lackluster performance is well documented. It’s why they put so many engineering hours into creating WSL2, and the results are impressive.

                                        Sorry your computer isn’t up to the required 2004 version. I know not everyone is running Windows at home just for themselves and may be locked down by IT or other constraints, but for those who do control their own systems the 2004 update is fully released and anyone can go grab it; it just hasn’t been queued for automatic deployment yet.

                                        Also, I wanted to thank you for your cards-on-the-table post about why you flagged and then un-flagged the article as spam. I think there’s a disconnect in this community around how flags are used. It seems to me that the mods created flags as a means of allowing the community to police itself and raise the bar, but people are instead using them, as you did, as a way to say “This article lacks technical merit” or in some cases even “I disagree”, which to me is an even more egregious abuse of the mechanism.

                                        1. 1

                                          I’ll have to check whether I can update to 2004… we’ve gotten new practices re: computer management from the mothership.

                                          I’ve actually been quite happy with Cygwin for my purposes but figured the WSL is The Future(tm) now.

                                          1. 1

                                            Interesting that you find Cygwin’s performance better than WSL1. I’ve used Cygwin off-and-on for many years but tend to prefer a Linux VM (via Vagrant) when forced to use Windows. In my limited usage I’ve found WSL1 to be a better experience than Cygwin, at least for interactive shell usage – I’ve not done any heavy processing with it.

                                            1. 1

                                              Glad to hear Cygwin meets your needs. It’s certainly battle tested!

                                              One of the things that others have cited in this thread that WSL brings to the table is official support from external tools like IDEs.

                                              I can write and debug my code in VSCode or Pycharm, and then deploy and debug in WSL because both tools explicitly have first class support for it.

                                              This is a pretty compelling feature for some of us.

                                          2. 2

                                            I actually installed WSL1 yesterday, and was mildly disappointed my computer is not yet updated to be able to handle WSL2.

                                            Dunno if you saw this most recent announcement. They backported it even further. Hope this helps!

                                            1. 2

                                              Thanks, I did see that! Unfortunately corporate policy still has me stuck on a version that’s too old…

                                      2. 2

                                        It’s a little sad to see this. Perhaps this kind of behaviour could be looked at and addressed? I’m not sure what to call it, but it feels distasteful to me.

                                        1. 2

                                          I think it’s a simple matter of the community not having a good shared understanding of what the flag feature should be used for.

                                          My impression is that the moderators meant for it to be a relatively serious step that could be used to censure posts that are VERY far afield from the intent of the community, but instead people are using it for making statements like “I disagree”, “This lacks technical merit” or “This represents a commercial interest engaging in marketing” which I’d personally assert is part and parcel of “lacks technical merit”.

                                          I don’t know how we get the word out about this though without being too heavy handed.

                                          1. 2

                                            I see some comments with explanations already, which is great, as knowing why is going to be key in finding a better way to solicit the feedback that this is aiming for.

                                      1. 23

                                        Personally, I prefer the imperative for the subject line, and no period at the end. For some context:

                                        1. 8

                                          Ditto. I usually pass this around directly, as it is actually a good recap: https://chris.beams.io/posts/git-commit/#seven-rules

                                        1. 1

                                          Some of those look like they could be in the IOCCC.

                                          Since it’s slightly related, someone might find a 16 byte bubblesort in x86 assembly amusing.

                                          1. 2

                                            I never understood the advantages of ninja with respect to make. It seems to boil down to things like the build files not giving tab characters semantic value, the -j option being on by default, or the syntax being simpler and slightly better. But apart from that, what are the essential improvements that would justify a change from make to ninja? If ninja is only slightly better than GNU make, I tend to prefer GNU make, which I already know, which is ubiquitous, and which avoids a new build dependency.

                                            1. 14

                                              The article discusses how it’s really a low-level execution engine for build systems like CMake, Meson, and the Chrome build system (formerly gyp, now GN).

                                              So it’s much simpler than Make, faster than Make, and overlapping with the “bottom half” of Make. This sentence is a good summary of the problems with Make:

                                              Ninja’s closest relative is Make, which attempts to encompass all of this programmer-facing functionality (with globbing, variable expansions, substringing, functions, etc.) that resulted in a programming language that was too weak to express all the needed features (witness autotools) but still strong enough to let people write slow Makefiles. This is vaguely Greenspun’s tenth rule, which I strongly attempted to avoid in Ninja.

                                              FWIW as he also mentions in the article, Ninja is for big build problems, not necessarily small ones. The Android platform build system used to be written in 250K lines of GNU Make, using the “GNU Make Standard Library” (a third-party library), which as far as I remember used a Lisp-like encoding of Peano numbers for arithmetic …

                                              1. 6
                                                # ###########################################################################
                                                # ARITHMETIC LIBRARY
                                                # ###########################################################################
                                                
                                                # Integers are represented by lists with the equivalent number of x's.
                                                # For example the number 4 is x x x x. 
                                                
                                                # ----------------------------------------------------------------------------
                                                # Function:  int_decode
                                                # Arguments: 1: A number of x's representation
                                                # Returns:   Returns the integer for human consumption that is represented
                                                #            by the string of x's
                                                # ----------------------------------------------------------------------------
                                                int_decode = $(__gmsl_tr1)$(if $1,$(if $(call seq,$(word 1,$1),x),$(words $1),$1),0)
                                                
                                                # ----------------------------------------------------------------------------
                                                # Function:  int_encode
                                                # Arguments: 1: A number in human-readable integer form
                                                # Returns:   Returns the integer encoded as a string of x's
                                                # ----------------------------------------------------------------------------
                                                __int_encode = $(if $1,$(if $(call seq,$(words $(wordlist 1,$1,$2)),$1),$(wordlist 1,$1,$2),$(call __int_encode,$1,$(if $2,$2 $2,x))))
                                                __strip_leading_zero = $(if $1,$(if $(call seq,$(patsubst 0%,%,$1),$1),$1,$(call __strip_leading_zero,$(patsubst 0%,%,$1))),0)
                                                int_encode = $(__gmsl_tr1)$(call __int_encode,$(call __strip_leading_zero,$1))
                                                
                                                1. 3

                                                  Source, please? I can’t wait to see what other awful things it does.

                                                  1. 1

                                                    Yup exactly, although the representation looks flat, it uses recursion to turn 4 into x x x x! The __int_encode function is recursive.

                                                    It’s what you would do in Lisp if you didn’t have integers. You would make integers out of cons cells, and traverse them recursively.

                                                    So it’s more like literally Greenspun’s tenth rule, rather than “vaguely” !!!

                                                  2. 1

                                                    Yes, so I guess its main advantage is that it is really scalable. This is not a problem that I have ever experienced, my largest project having two hundred files that compiled in a few seconds, with the time spent by make itself negligible. On the other hand, for such a small project you get to enjoy the ad-hoc GNU make features, like the implicit .c -> .o compilation rules, the use of the CFLAGS and LDFLAGS variables, and so on. You can often write a makefile in three or four lines that compiles your project; I guess with ninja you would have to be much more verbose and explicit.

                                                    1. 5

                                                      He mentions that the readme explicitly discourages people with small projects from using it.

                                                      I suspect it’s more that ninja could help you avoid having to add the whole disaster that is autotools to a make-based build rather than replacing make itself.

                                                      1. 2

                                                        I suspect it’s more that ninja could help you avoid having to add the whole disaster that is autotools to a make-based build rather than replacing make itself.

                                                        Sure; autotools is a complete disaster and a really sad thing (and the same can be said about cmake). For small projects with few, non-configurable dependencies, it is actually feasible to write a makefile that will seamlessly compile the same code on Linux and macOS. And, if you don’t care whether Windows users can compile it themselves, you can even cross-compile a Windows binary from a compiler on Linux.

                                                      2. 2

                                                        You don’t (or better, shouldn’t!) write Ninja build descriptions by hand. The whole idea is that something like CMake generates what Ninja actually parses. I’ve written maybe 3 ninja backends by now.

                                                    2. 4

                                                      we use ninja in pytype, where we need to create a dependency tree of a project, and then process the files leaves-upwards with each node depending on the output of processing its children as inputs. this was originally done within pytype by traversing the tree a node at a time; when we wanted to parallelise it we decided to instead generate a ninja file and have it invoke a process on each file, figuring out what could be done in parallel.

                                                      we could doubtless have done the same thing in make with a bit of time and trouble, but ninja’s design decisions of separating the action graph out cleanly and of having the build files be easy to machine generate made the process painless.

                                                      1. 4

                                                        It’s faster. I am (often) on Windows, where the difference can feel substantial. The Meson site has some performance comparisons and mentions: “On desktop machines Ninja based build systems are 10-20% faster than Make based ones”.

                                                        1. 2

                                                          I use it in all my recent projects because it can parse cl.exe /showIncludes output

                                                          But generally like andyc already said, it’s just a really good implementation of make

                                                        1. 3

                                                          I usually have a ‘staging’ branch that I use for collecting smaller changes. The benefit is that I can push the staging branch to Git*b and have CI run on it without affecting master until I merge the changes there.

                                                          1. 2

                                                            Mine is called “dev”. Doing it for personal projects. This way master is always green in CI and I can test & force-push until CI is green. Also I can always apply single hot-fixes to master if something should come up.

                                                            1. 1

                                                              My bucket-of-bits branch name of abuse is wip. Or I abuse git stash if I’m exploring and am not sure it’s worth even committing into any history.

                                                          1. 1

                                                            I think Stroustrup’s quote about there only being two kinds of languages applies to build systems as well. I must admit I haven’t looked into xmake much yet, are there any larger projects using it?

                                                            There is an excellent talk by Jussi Pakkanen (Meson) from CppCon 2019 about the rabbit hole of complexity build systems have to handle.

                                                            1. 3

                                                              Yes, there are not many larger projects using it at present, because I didn’t do much to promote it.

                                                              But there are still some outstanding projects using xmake.

                                                              https://github.com/tboox/tbox https://github.com/idealvin/co https://github.com/acl-dev/acl … see more https://github.com/xmake-io/awesome-xmake/blob/master/README.md#projects

                                                            1. 1

                                                              Here are a few links to talks and other stuff on modern CMake I’ve enjoyed:

                                                              The modern CMake approach of thinking about targets and properties on them rather than modifying global state and hoping for the best improved my CMake experience a lot, that being said I usually choose Meson for my own small projects.

                                                              1. 13

                                                                I originally also suppressed this output on non-terminal devices, but then prog | less will still hang without a message, which is not great. I would encourage suppressing this output with a -q or -quiet flag.

                                                                STDIN might be a terminal while STDOUT or STDERR are not; you have different FDs, not a single STDIO device.

                                                                For example in C, you can test particular FDs this way:

                                                                #include <stdio.h>
                                                                #include <unistd.h>
                                                                
                                                                void check(int fd) {
                                                                	if (isatty(fd))  printf("FD %d is a terminal\n", fd);
                                                                	else             printf("FD %d is a file or a pipe\n", fd);
                                                                }
                                                                
                                                                int main(void) {
                                                                	check(fileno(stdin));
                                                                	check(fileno(stdout));
                                                                	check(fileno(stderr));
                                                                	return 0;
                                                                }
                                                                

                                                                Output:

                                                                $ make is-a-tty
                                                                cc     is-a-tty.c   -o is-a-tty
                                                                
                                                                $ ./is-a-tty 
                                                                FD 0 is a terminal
                                                                FD 1 is a terminal
                                                                FD 2 is a terminal
                                                                
                                                                $ ./is-a-tty | cat
                                                                FD 0 is a terminal
                                                                FD 1 is a file or a pipe
                                                                FD 2 is a terminal
                                                                
                                                                $ echo | ./is-a-tty
                                                                FD 0 is a file or a pipe
                                                                FD 1 is a terminal
                                                                FD 2 is a terminal
                                                                
                                                                $ echo | ./is-a-tty | cat
                                                                FD 0 is a file or a pipe
                                                                FD 1 is a file or a pipe
                                                                FD 2 is a terminal
                                                                
                                                                $ ./is-a-tty 2>/dev/null 
                                                                FD 0 is a terminal
                                                                FD 1 is a terminal
                                                                FD 2 is a file or a pipe
                                                                

                                                                I would not recommend cluttering STDOUT/STDERR with superfluous messages if there is no error.

                                                                Indeed, there is Rule of Silence:

                                                                When a program has nothing surprising to say, it should say nothing.

                                                                Waiting for input from a file or pipe is expected, non-surprising behavior. Only when waiting for input from the terminal does it make sense to print a prompt or guide the user on what to do.
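
                                                                As a sketch of that policy (a stand-in program, not the author’s actual code), prompting only when STDIN is a terminal:

                                                                ```c
                                                                #include <stdio.h>
                                                                #include <unistd.h>

                                                                int main(void) {
                                                                	/* Prompt only when a human is typing at a terminal;
                                                                	 * stay silent when input comes from a file or pipe. */
                                                                	if (isatty(fileno(stdin)))
                                                                		fputs("Enter lines (Ctrl-D to finish):\n", stderr);

                                                                	char line[256];
                                                                	while (fgets(line, sizeof line, stdin))
                                                                		fputs(line, stdout);	/* echo input back, like a minimal cat(1) */
                                                                	return 0;
                                                                }
                                                                ```

                                                                The prompt goes to STDERR so that redirecting STDOUT to a file still leaves the output clean.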

                                                                1. 2

                                                                  I forgot you can run isatty() on stdin, too. Previously it did check this for stdout, but I removed this earlier (isTerm is the result of isatty(stdout)).

                                                                  I’ll update the program and article; thanks.

                                                                  1. 3

                                                                    isatty on stdin is good to test if your users made a mistake, and then isatty() again on stderr to make sure your users are reading your message!
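
                                                                    For example (a hypothetical filter, not OP’s program), warning only when both checks pass:

                                                                    ```c
                                                                    #include <stdio.h>
                                                                    #include <unistd.h>

                                                                    int main(void) {
                                                                    	/* stdin being a terminal suggests the user forgot to pipe data in;
                                                                    	 * only print the hint if stderr is a terminal someone will read. */
                                                                    	if (isatty(fileno(stdin)) && isatty(fileno(stderr)))
                                                                    		fputs("hint: reading from the terminal; pipe data in or press Ctrl-D\n", stderr);

                                                                    	int c, count = 0;
                                                                    	while ((c = getchar()) != EOF)
                                                                    		count++;
                                                                    	printf("%d bytes read\n", count);
                                                                    	return 0;
                                                                    }
                                                                    ```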

                                                                  2. 1

                                                                    Strictly speaking, this is POSIX, not C. isatty has been broken in the past on Windows with some distributions of GCC; I am unsure what the status is these days.

                                                                  1. 2

                                                                    I would probably go with the conversion matrices specified in the sRGB standard (IEC 61966-2-1:1999, also listed on the Wikipedia page for sRGB), to hopefully be consistent with other implementations.

                                                                    Edit: there is a PDF on w3.org that also lists the values from the standard, and which adds some information about normalization. I don’t know if this accounts for the difference between the values in the standard and those derived in the article here.

                                                                    1. 2

                                                                      I went through this a couple of months ago while designing the CIE polar color conversion in piet. What I found was that many sources had slightly different values for these matrices (and in particular I didn’t find Wikipedia reliable), so I ended up going with the ones in the w3.org spec you linked. One way to validate these is to check that white ends up as white, to 6 decimal places, say.

                                                                      It’s also the case that an error in the 4th decimal place won’t be particularly visible, but it is evidence of taking care to get things right.

                                                                      ETA: here’s a link to the colab notebook I used to calculate this, including the verification of white point.
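
                                                                      To make that check concrete, here is a small sketch (matrix values as commonly quoted for linear sRGB to XYZ, rounded to 7 decimals; the normative source is IEC 61966-2-1) verifying that linear RGB (1, 1, 1) lands on the D65 white point:

                                                                      ```c
                                                                      #include <assert.h>
                                                                      #include <stdio.h>

                                                                      int main(void) {
                                                                      	/* Linear sRGB -> XYZ conversion matrix. */
                                                                      	const double M[3][3] = {
                                                                      		{0.4124564, 0.3575761, 0.1804375},
                                                                      		{0.2126729, 0.7151522, 0.0721750},
                                                                      		{0.0193339, 0.1191920, 0.9503041},
                                                                      	};
                                                                      	/* Linear RGB (1, 1, 1) must map to the D65 white point. */
                                                                      	const double d65[3] = {0.95047, 1.00000, 1.08883};

                                                                      	for (int i = 0; i < 3; i++) {
                                                                      		double xyz = M[i][0] + M[i][1] + M[i][2];	/* M * (1,1,1)^T */
                                                                      		double err = xyz - d65[i];
                                                                      		if (err < 0) err = -err;
                                                                      		printf("row %d: %.7f (expected %.5f)\n", i, xyz, d65[i]);
                                                                      		assert(err < 1e-6);	/* white stays white to ~6 decimals */
                                                                      	}
                                                                      	return 0;
                                                                      }
                                                                      ```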

                                                                    1. 12

                                                                      Does your makefile support Windows at all?

                                                                      This seems like a non-issue – who builds on Windows outside of an environment like cygwin? EDIT: who builds non-windows-first applications on windows using windows-specific build systems, rather than unix emulation layers? Supporting users of visual studio is a project in and of itself, & while there are lots of Windows users around, there are very few who have a compiler installed or are liable to want to build anything from source. It makes more sense to do builds for windows via a cross-compiler & ask folks who want to build on windows to use cygwin – both of which are substantially less effort than getting VS to build an already-working project.

                                                                      1. 15

                                                                        I believe that’s your experience, but you and I have radically different experiences as Windows users.

                                                                        First, to be very blunt: Cygwin is truly awful. It needs to die. Cygwin is not a port of *nix tools to Windows; that’s MSYS. Cygwin is a really weird hacky port of a *nix to Windows. It’s basically WSL 0.0. It does not play well with native Windows tooling. It honestly doesn’t play well with Windows in general. And it’s comically slow, even compared to vaguely similar solutions such as WSL1. If I see a project that claims Windows support, and see Cygwin involved, I don’t even bother. And while I don’t know if a majority of devs feel similarly, a substantial enough group of Windows devs agree that I know my opinion’s not rare, either.

                                                                        You’re right that Visual Studio is the go-to IDE on Windows, in the same way that Xcode is on Mac. But just as Mac users don’t necessarily bust out Xcode for everything, Windows devs don’t necessarily bust out Visual Studio. Using nmake from the command line is old as dirt and still common (it’s how we build Factor on Windows, for instance), and I’ve seen mingw-based Windows projects that happily use cross-platform gnumake Makefiles. CMake is also common, and has the benefit that you can generate Visual Studio projects/solutions when you want, and drive everything easily from the command line when you want. These and similar tools designed to be used without Visual Studio are common enough that Microsoft continues to release the command-line-only Windows SDK for the most recent Windows 10, and they do that because plenty of devs really do only want that, not all of Visual Studio.

                                                                        For reasons you point out elsewhere, there’s a lot that goes into supporting Windows beyond the Makefile, to the point that concerns about cross-platform make may be moot, but “Windows devs will use Cygwin” seems reductionist.

                                                                        1. 5

                                                                          I don’t think windows devs use cygwin. I think that non-windows devs use cygwin (or mingw or one of the ten other unix-toolchain-for-windows environments) so that they don’t need to go through the hoops to become windows devs.

                                                                          In other words, I’m not really sure who the audience is for OP’s argument re: building on windows.

                                                                          If you’re building on windows & you are a windows dev, why care about make at all? If you’re building on windows & you are not a windows dev, why care about the first group at all? In my (dated & limited) experience these two ecosystems hardly interact & the tooling to make such interaction easier is mostly done by unix-first developers who want to check windows builds off the list with a minimum of effort.

                                                                          1. 3

                                                                            I think you need to take into consideration that there are also libraries. Sure, if you have an application developed on non-Windows, the easiest way to port it to Windows is building it in MSYS, with MinGW, or possibly Clang. But if you develop a library that you want Windows developers to be able to use in their projects, you have to support them building it with their tools, which often means MSVC.

                                                                        2. 8

                                                                          who builds on Windows outside of an environment like cygwin?

                                                                          I don’t understand this question. There are lots of software applications for Windows, each one has to be built, and cygwin is used really rarely. And CMake is precisely for supporting Visual Studio and gcc/clang at the same time, this is one of the purposes of the tool.

                                                                          1. 2

                                                                            In software applications that are only for windows, supporting unix make isn’t generally even on the table. Why would you, when a lot more than the build system would need to change to make a C or C++ program aimed at windows run on anything else?

                                                                            It only really makes sense to consider make for code on unix-like systems. It’s very easy to cross-compile code intended for unix-like systems to windows without actually buying a copy of windows, and it’s very easy for windows users to compile these things on windows using mechanisms to simulate a unix-like system, such as cygwin.

                                                                            There are a lot of alternative build systems around, including things like cmake and autotools that ultimately produce makefiles on unix systems. If your project actually needs these tools, there are probably design issues that need to be resolved (like overuse of unreliable third party dependencies). These build systems do a lot of very complicated things that developers ought not to depend upon build systems for, like generating files that play nice with visual studio.

                                                                            1. 2

                                                                              In software applications that are only for windows, supporting unix make isn’t generally even on the table.

                                                                              Every team I’ve been on which used C++ has used CMake or FASTBuild, so supporting Unix builds at some point isn’t off the table, and it makes builds a lot easier to duplicate and simplifies CI/CD. Every project I’ve seen with build configuration in a checked-in Visual Studio solution makes troubleshooting build issues a complete nightmare since diffs in the configs can be hard to read. CMake’s not great, but it’s one of the more commonly supported tools.

                                                                              If your project actually needs these tools, there are probably design issues that need to be resolved (like overuse of unreliable third party dependencies).

                                                                              I’m not sure how this logically follows.

                                                                              These build systems do a lot of very complicated things that developers ought not to depend upon build systems for, like generating files that play nice with visual studio.

                                                                              Using CMake (or something else that generates solution files for Visual Studio) gives developers options in how they want to work. If they want to develop on Linux with vim (or emacs), that’s fine. If they want to use CLion (Windows, Mac or Linux), that’s also fine. There really isn’t that much extra to do to support Visual Studio solution generation. Visual Studio has a fine debugger and despite many rough edges is a pretty decent tool.

                                                                          2. 2

                                                                            This seems like a non-issue – who builds on Windows outside of an environment like cygwin?

                                                                            Most Windows developers and cross-platform frameworks that I can tell.

                                                                            1. 4

                                                                              I should rephrase:

                                                                              Who builds cross-platform applications not originally developed on windows outside of an environment like cygwin?

                                                                              Windows developers don’t, as a rule, care about the portability concerns that windows-first development creates, & happily use tools that make windows development easier even when it makes portability harder. And cross-platform frameworks tend to do at least some work targeting these developers.

                                                                              But, although no doubt one could, I don’t think (say) GIMP and Audacity builds are done through VS. For something intended to be built with autotools+make, it’s a lot easier to cross-compile with winecc or build on windows with cygwin than to bring up an IDE with its own distinct build system – you can even integrate it with your normal build automation.

                                                                              1. 2

                                                                                I work on software that is compiled on Windows, Mac, and Linux, and is generally developed by people on Windows. We do not use Cygwin, which as gecko points out above, is truly awful. If I need to use Linux, I use WSL or a VirtualBox VM. And yes, I and my team absolutely care about portability, despite the fact that we develop primarily on Windows.

                                                                          1. 2

                                                                            Unrelated to the content, but that first example for getcpu() looks wrong. It passes uninitialized pointers to getcpu() and then casts those (still uninitialized) pointers to int pointers (which they are not) in the calls to printf().
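
                                                                            For reference, a corrected sketch of such an example (assuming glibc 2.29+, which declares getcpu() in <sched.h>; older glibc needs syscall(2)) would pass pointers to real unsigned int objects and print them with %u:

                                                                            ```c
                                                                            #define _GNU_SOURCE
                                                                            #include <stdio.h>
                                                                            #include <sched.h>	/* getcpu() (glibc >= 2.29) */

                                                                            int main(void) {
                                                                            	unsigned int cpu = 0, node = 0;	/* actual objects, not dangling pointers */
                                                                            	if (getcpu(&cpu, &node) != 0) {
                                                                            		perror("getcpu");
                                                                            		return 1;
                                                                            	}
                                                                            	printf("running on cpu %u, node %u\n", cpu, node);	/* %u matches unsigned int */
                                                                            	return 0;
                                                                            }
                                                                            ```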

                                                                            1. 2

                                                                              The updates in the article contain most of what I would comment, except perhaps that the fixed-width integer types are optional (present if the implementation provides them).

                                                                              1. 4

                                                                                Like many things in C, it is fairly easy to write an implementation of a dynamic array, but not necessarily trivial to handle the corner cases.

                                                                                For instance, from a quick glance it appears none of the three examples linked here check if doubling the capacity will overflow. Two of them use an int to store the capacity, and overflowing that would be UB, and in all cases it could result in the allocated memory suddenly shrinking, introducing out of bounds memory accesses.
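
                                                                                A sketch of an overflow-aware growth policy (grow_capacity is a hypothetical helper, not taken from any of the linked implementations) would refuse to grow rather than wrap around:

                                                                                ```c
                                                                                #include <assert.h>
                                                                                #include <stdint.h>
                                                                                #include <stdio.h>

                                                                                /* Hypothetical helper: returns the doubled capacity, or 0 when doubling
                                                                                 * would overflow size_t (the caller treats 0 like an allocation failure).
                                                                                 * The cap * sizeof(elem) passed to realloc() needs a similar check. */
                                                                                static size_t grow_capacity(size_t cap) {
                                                                                	if (cap == 0)
                                                                                		return 16;	/* arbitrary initial capacity */
                                                                                	if (cap > SIZE_MAX / 2)
                                                                                		return 0;	/* doubling would wrap around */
                                                                                	return cap * 2;
                                                                                }

                                                                                int main(void) {
                                                                                	assert(grow_capacity(0) == 16);
                                                                                	assert(grow_capacity(16) == 32);
                                                                                	assert(grow_capacity(SIZE_MAX) == 0);	/* refuses instead of shrinking */
                                                                                	puts("all growth checks passed");
                                                                                	return 0;
                                                                                }
                                                                                ```

                                                                                Using size_t also sidesteps the signed-overflow UB that an int capacity invites.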

                                                                                1. 1

                                                                                  Now, I changed that from “int” to “unsigned”.

                                                                                  1. 1

                                                                                    Honestly I posted mine because I am stuck on some memory bugs in my implementation right now and I was hoping someone would point something out. The issue I’m currently having has to do with an invalid checksum for a freed object:

                                                                                    slowjs(1679,0x106c0e5c0) malloc: Incorrect checksum for freed object 0x7faf7a403168: probably modified after being freed.
                                                                                    

                                                                                    While the error occurs in the vector library, I get the feeling it is a bug in the code using the vector, not in the vector itself.

                                                                                    That said, I totally forgot about dealing with unsigned ints and having a max-capacity check. Will fix.

                                                                                  1. 8

                                                                                    For projects that are exactly C (not C++), I found check quite nice. It’s in C and has no C++ dependencies.

                                                                                    Here’s an example: https://github.com/vyos/ipaddrcheck/blob/current/tests/check_ipaddrcheck.c

                                                                                    1. 7

                                                                                      Another one that’s focused on C that I’ve enjoyed for several years now is greatest.

                                                                                      https://github.com/silentbicycle/greatest

                                                                                      I tend to use it over others because it exists in a single header file, so it’s easy to add to existing projects without fighting my build system for too long.

                                                                                      1. 5

                                                                                        Love greatest, it runs everywhere so you have no problems getting it to build on your CI.

                                                                                        Also used µnit sometimes; the reproducible random number generator can be great when a test fails.

                                                                                        1. 2

                                                                                          That looks great, with a similar API to gtest. Thanks for the tip!