Threads for GeoffWozniak

  1. 16

    It’s really hard to take an article like this seriously.

    The opening paragraph:

    This article deals with the use of bool in C++. Should we use it or not? That is the question we will try to answer here. However, this is more of an open discussion than a coding rule.

    So we are having an “open discussion” in an article whose title is definitive, denying the very idea of a discussion. Gotcha.

    Then, if you continue, the “discussion” is laughably simplistic and narrow. It trades Booleans for types with a design that is most generously described as questionable. The problem at the core is understanding the problem domain. If a function like shouldBuyHouse is taking parameters to describe all the variations, then trading Booleans for types isn’t going to solve much unless your domain is rudimentary, in which case Booleans are probably just fine.

    1. 2

      It’s really a question on the use of booleans in function signatures, not in general.

      If a function like shouldBuyHouse is taking parameters to describe all the variations, then trading Booleans for types isn’t going to solve much unless your domain is rudimentary, in which case Booleans are probably just fine.

      It’s definitely better if calling a function like shouldBuyHouse requires identifiers named hasPool/noPool versus positional true/false params. That is, shouldBuyHouse(hasPool, badLights) is definitely better than shouldBuyHouse(true, false). Right? No?

      1. 3

        It’s definitely better if the caller is passing a constant, as in your example.

        It gets awkward otherwise:

        shouldBuyHouse(buyerCanSwim == canSwim ? hasPool : doesntHavePool,
            buyerIsBlind ? badLights : goodLights)
        

        I love C++ enum classes, but the drawback is they have no implicit conversions to anything…

        Another way to look at the problem described here is that it’s a lack of parameter naming at the call site. Booleans are much clearer in shouldBuyHouse(hasPool: true, badLights: false). Right?

        1. 2

          I agree 100% - the lack of named arguments in C++ is mostly why we resort to using enum classes to represent boolean values. Unfortunately this trick only works for booleans and not so much for strings or other types, which is why we often resort to the builder pattern or similar idioms to simulate named arguments.

          1. 1

            Well, you probably wouldn’t express the mapping of buyerCanSwim to hasPool, or buyerIsBlind to whichLights, inline with the calling expression.

            poolRequirement = buyerCanSwim ? hasPool : doesntHavePool
            lightRequirement = buyerIsBlind ? badLights : goodLights
            purchaseDecision = shouldBuyHouse(poolRequirement, lightRequirement)
            

            Named parameters have appeal, but they tend to go hand-in-hand with optionality, which is a huge source of risk, IME far more costly than the benefits delivered by the feature. If you could have named parameters without the option of leaving any out, then I’d agree that’s a good solution, except that it still allows callers to call shouldBuyHouse(true, false), which is exactly the thing we’re trying to prevent here by construction. So I dunno. I wouldn’t put them in my language.

            1. 2

              I think Swift’s approach is quite interesting: https://docs.swift.org/swift-book/LanguageGuide/Functions.html

              Parameter names are encouraged by the syntax and non-optional for the caller, and must be provided in definition order. There’s a syntax to say “the caller doesn’t need to provide a name here”, but otherwise the caller must provide the argument names in basic-looking signatures like shouldBuyHouse(hasPool: bool, badLights: bool) -> bool.

              Swift does have default parameters though, which are as bad or worse than optional parameters - but at least they need to be trailing, and order is guaranteed.

          2. 2

            When it’s a function with two parameters backing an uncompelling example, then I argue it still mostly doesn’t matter, and if you want, those identifiers could be globals or even macros, if you’re using C.

            I’m not advocating for Booleans as arguments in general, I’m just asking for better writing. This article is not worth anyone’s time.

            1. 3

              If the two parameters you mention are both booleans, then for readability it definitely matters whether you use a bool or an enum. The tradeoff is some extra work for the enum, and it’s reasonable to argue whether (or not) the extra work is worthwhile.

              One bonus the enum gives you is you can more easily replace it with a policy object.

        1. 19

          This implementation is not nearly pure GNU Make enough for me. :)

          Here’s one that is pure Make. Some of it is cribbed from The GNU Make Book and not all the definitions are necessary. Numbers are represented by a list of xs. So 5 would be the string x x x x x.

          It counts down from 100 instead of up. I don’t really care.

          _pow2 = $(if $1,$(foreach a,x x,$(call _pow2,$(wordlist 2,$(words $1),$1))),x x)
          _all := $(call _pow2,1 2 3 4 5 6 7 8) 
          num = $(words $1)
          val = $(wordlist 1,$1,$(_all))
          incr = x $1
          decr = $(wordlist 2,$(words $1),$1)
          
          max = $(subst xx,x,$(join $1,$2))
          min = $(subst xx,x,$(filter xx,$(join $1,$2)))
          
          eq = $(filter $(words $1),$(words $2))
          ne = $(filter-out $(words $1),$(words $2))
          gt = $(filter-out $(words $2),$(words $(call max,$1,$2)))
          gte = $(call gt,$1,$2)$(call eq,$1,$2)
          lt = $(filter-out $(words $1),$(words $(call max,$1,$2)))
          lte = $(call lt,$1,$2)$(call eq,$1,$2)
          
          add = $1 $2
          subtract = $(if $(call gte,$1,$2),$(filter-out xx,$(join $1,$2)),$(warning Underflow))
          multiply = $(foreach a,$1,$2)
          divide = $(if $(call gte,$1,$2),x $(call divide,$(call subtract,$1,$2),$2),)
          mod = $(if $(call gte,$1,$2),$(call mod,$(call subtract,$1,$2),$2),$1)
          
          fizz = $(call eq,$(call val,0),$(call mod,$1,$(call val,3)))
          buzz = $(call eq,$(call val,0),$(call mod,$1,$(call val,5)))
          fizzbuzz = $(and $(call fizz,$1),$(call buzz,$1))
          
          fbcheck = $(if $(call fizzbuzz,$1),$(info FizzBuzz),\
                      $(if $(call fizz,$1),$(info Fizz),\
                         $(if $(call buzz,$1),$(info Buzz),\
                             $(info $(call num,$1)))))
          
          loop = $(if $1,$(call fbcheck,$1)$(call loop,$(call decr,$1)),)
          
          all: ; @echo $(call loop,$(call val,100))
          
          1. 7

            This implementation is not nearly pure GNU Make enough for me. :)

            That’s the spirit :-) I saw a blog post about doing this kind of arithmetic in make but didn’t want to go this far down the rabbit hole. I’m glad you did though.

            1. 4

              Comments and articles like these are why I love lobste.rs :D

              1. 3

                Wow. Some things people were not meant to know.

                This note implies GNU make is Turing complete.

                https://okmij.org/ftp/Computation/#Makefile-functional

              1. 16

                This is an excellent resource! I worked on a feed reader from 2003-2007, and broken feeds were a constant annoyance. A lot of this seemed to be caused by generating the feed with the same template engine as the HTML, but not taking into account that it’s supposed to be XML.

                I hope the situation is better now, but the major mistakes I saw then were:

                • Invalid XML, probably caused by sloppy code generating it. People get used to sloppy HTML because browsers are forgiving, but XML is stricter and doesn’t allow improper nesting or unquoted attributes.
                • HTML content embedded unquoted in the XML. This can be done legally, but the content has to be valid XHTML, else it breaks the feed. If in doubt, wrap CDATA around your HTML.
                • Incorrectly escaped text. It’s not hard to XML-escape text, but people managed to screw it up. Get it wrong one way and it breaks the XML; or you can double-escape and then users will see garbage like “&quot;” in titles and content.
                • Bad text encoding. Not declaring the encoding and making us guess! Stating one encoding but using another! An especially “fun” prank was to use UTF-8 for most of the feed but have the content be something else like ISO-8859.
                • Badly-formatted dates. This was a whole subcategory … using the wrong date format, or localized month names, or omitting the time zone, or other more creative mistakes.
                • Not using entry UUIDs and then changing the article URLs. Caused lots of complaints like “the reader marked all the articles unread again!”
                • Serving the feed as dynamic content without a Last-Modified or Etag header. Not technically a mistake, but hurts performance on both sides due to extra bandwidth and the time to generate and parse.

                Fortunately you can detect nearly all these by running the feed through a validator. Do this any time you edit your generator code/template.

                For anyone wanting to write a feed reader: you’ll definitely want something like libTidy, which can take “tag soup” and turn it into squeaky clean markup. Obviously important for the XML, but also for article HTML if you plan on embedding it inside a web page — otherwise errors like missing close tags can destroy the formatting of the enclosing page. LibTidy also improves security by stripping potentially dangerous stuff like scripts.

                The one thing in this article I disagree with is the suggestion to use CSS to style article content. It’s bad aesthetically because your articles will often be shown next to articles from other feeds, and if every article has its own fonts and colors it looks like a mess. Also, I think most readers will just strip all CSS (we did) because there are terrible namespace problems when mixing unrelated style sheets on the same page.

                PS: For anyone doing research on historical tech flame wars, out-of-control bikeshedding, and worst-case scenarios of open data format design — the “feed wars” of the early/mid Oughts are something to look at. Someone (Mark Pilgrim?) once identified no less than eleven different incompatible versions of RSS, some of which didn’t even have their own version numbers because D*ve W*ner used to like to make changes to the RSS 2.0 “spec” (and I use that term loosely) without bumping the version.

                1. 5

                  Not using entry UUIDs and then changing the article URLs. Caused lots of complaints like “the reader marked all the articles unread again!”

                  I have unsubscribed from certain blogs because of this. It’s no fun when they keep “posting” the last 10 articles all the time…

                  1. 1

                    It drives me mad when I occasionally update my feeds and suddenly have tens, or hundreds (!) of “new” articles.

                    Doesn’t happen often enough that I’d want to delete the feed, but still very annoying.

                  2. 3

                    Someone (Mark Pilgrim?) once identified no less than eleven different incompatible versions of RSS,

                    I suspect this is because of intense commitment to the robustness principle (Postel’s Law). Tim Bray rebutted Dave Winer and Aaron Swartz’s frankly goofy devotion to this idea. I think it’s better to follow Bray’s advice.

                    1. 6

                      Actually it was Pilgrim and Aaron Swartz he was rebutting in that blog post, not Winer.

                      And the 11-different-versions madness had nothing to do with liberal parsers, but with custody battles, shoehorning very different formats under the same name (RDF vs non-RDF), Winer’s allergy to writing a clear detailed spec or at least versioning his changes to it, and various other people’s ego trips.

                      In my experience, writing a liberal parser was a necessity because large and important feed publishers were among those serving broken feeds, and when your client breaks on a feed, users blame you, said users including your employer’s marketing department. Web browsers have always been pretty liberal for this reason.

                      1. 1

                        Actually it was Pilgrim and Aaron Swartz he was rebutting in that blog post, not Winer.

                        Oh, right. Typed the wrong name there. Not gonna go back and edit it, though.

                    2. 2

                      There’s one good alternative to UUIDs: tag URIs. They have one benefit over UUIDs: they’re human-readable (e.g. tag:example.org,2005:posts/123).

                      I remember the feed wars! Winer’s petulance caused so much damage. I haven’t used anything but Atom since then for anything I publish, and I advise people to give the various flavours of RSS a wide berth.

                    1. 4

                      “ As a user, you can force allow zooming”

                      Isn’t this problem solved, then?

                      1. 21

                        No. Just because there’s an option to enable it, that doesn’t mean disabling it should be encouraged. Not everyone knows about the option, for one thing.

                        1. 10

                          You’ve identified a web browser UI design problem, which can be solved by the probably-less-than-double-digits number of teams developing popular web browsers, rather than by asking tens of millions of web content creators to change their behavior.

                          1. 5

                            Perhaps browser makers can treat it like a potentially undesirable thing. Similar to “(site) wants to know your location. Allow/Block” or “(site) tried to open a pop up. [Open it]”

                            So: “(site) is trying to disable zooming. [Agree to Disable] [Never agree]” or similar.

                          2. 8

                            I think the better question is why you can disable this in the first place. It shouldn’t be possible to disable accessibility features, as website authors have proven time and time again that they make the wrong decisions when given such capabilities.

                            1. 3

                              I mean, what’s an accessibility feature? Roughly everything is an accessibility feature for someone. CSS lets you set a font for your document. People with dyslexia may prefer a system-wide font setting such as Dyslexie. Should it not be OK to provide a stylesheet that overrides system preferences (unless the proper settings are chosen on the client)?

                              1. 3

                                Slippery slope fallacies aren’t really productive. There’s a pretty clear definition of the usual accessibility features, such as being able to zoom in or meta data to aid screen readers. Developers should only be able to aid such features, not outright disable them.

                                1. 6

                                  I think this is a misunderstanding of what “accessibility” means. It’s not about making things usable for a specific set of abilities and disabilities. It’s about making things usable for ALL users. Color, font, size, audio or visual modality, language, whatever. It’s all accessibility.

                                2. 1

                                  https://xkcd.com/1172/

                                  (That said, I don’t understand why browsers let sites disable zoom at all.)

                              2. 6

                                Hi. Partially blind user here - I, for one, can’t figure out how to do this in Safari on iOS.

                                1. 3

                                  “Based on some quick tests by me and friendly people on Twitter, Safari seems to ignore maximum-scale=1 and user-scalable=no, which is great”

                                  I think what the author is asking for is already accomplished on Safari. If it isn’t, then the author has not made a clear ask to the millions of people they are speaking to.

                                  1. 4

                                    I am a web dev dilettante / newbie, so I will take your word for it. I just know that more and more web pages are becoming nearly impossible to view on mobile with my crazy-pants busted eyes, or wildly difficult enough to be equivalent to impossible in any case :)

                                    1. 4

                                      And that is a huge accessibility problem. This zoom setting is a huge accessibility problem.

                                      My point is that the solution to this accessibility problem (and almost all accessibility problems) is to make the browser ignore this setting, not to ask tens of millions of fallible humans to update literally trillions of web pages.

                                      1. 4

                                        As another partially blind person, I fully agree with you. Expecting millions of developers and designers to be fully responsible for accessibility is just unrealistic; the platforms and development tools should be doing more to automatically take care of this. Maybe if the web wasn’t such a “wild west” environment where lots of developers roll their own implementations of things that should be standardized, then this wouldn’t be such a problem.

                                        1. 2

                                          Agreed. Front end development is only 50% coding. The rest is design, encompassing UX, encompassing human factors, encompassing accessibility. You can’t apply an “I’m just a programmer” or “works on my machine” mindset when your code is running on someone else’s computer.

                                          1. 2

                                            Developers and designers do have to be responsible for accessibility. I’m not suggesting that we aren’t.

                                            But very often, the accessibility ask is either “Hey, Millions of people, don’t do this” or “Hey, three people, let me ignore it when millions of people do this”. And you’re much better off lobbying the three people that control the web browsers to either always, or via setting, ignore the problem.

                                1. 2

                                  It’s not the use of sed, it’s the use of -i. Oh man, be so careful if you use that option.

                                  1. 1

                                    Using find . -exec sed is dangerous in a git repo

                                    Yes, agreed. I started programming with Perl, and one pet peeve that has stuck with me is when people use the -i flag without specifying a backup suffix (e.g. -i.bak) for potential recovery. It’s a terrible practice.

                                  1. 1

                                    What is the point of that paper exactly?

                                    1. 4

                                      To point out that existing protocols for IDEs (or whatever) to talk to language services are insufficient and to propose one way to address it: by reconfiguring the IDE automatically to account for the different protocols that are required to talk to the services for the language.

                                      1. 3

                                        I think the abstract provides a clear summary.

                                      1. 3

                                        Does the corresponding libstdc++ release support std::format? That’s the only modern C++ feature that I’ve missed on platforms where GCC is the system compiler.

                                        1. 2

                                          No.

                                        1. 20

                                          Warning: this is not supposed to be taken very seriously. It’s not a joke, but I won’t bet 2 cents that I’m right about any of it.

                                          Pretty much all widely used languages today have a thing. Having a thing is not, by far, the only determinant factor in whether a language succeeds, and you can even question whether wide adoption is such a good measure of success. But the fact is, pretty much all languages we know and use professionally have a thing, or indeed, multiple things:

                                          • Python has simplicity, and later, Django, and later even, data science
                                          • Ruby has Rails and developer happiness (whatever that means)
                                          • Go had simplicity (same name, but a different thing than Python’s) and concurrency (and Google, but I don’t think that it counts as a thing)
                                          • PHP had web, and, arguably, Apache and cheap hosts
                                          • JavaScript has the browser
                                          • Typescript has the browser, but with types
                                          • Java had the JVM (write once, run everywhere), and then enterprise
                                          • C# had Java, but Microsoft, and then Java, but better
                                          • Rust has memory safety even in the presence of threads

                                          Even older languages like SQL, Fortran, Cobol, they all had a thing. I can’t see what Hare’s thing might be. And to be fair, it’s not a problem exclusive to, or especially represented by, Hare. 9/10 times, when there’s a post anywhere about a new language, it has no thing. None. It’s not even that it’s not particularly well suited for its thing; it can’t even articulate what its thing is.

                                          “Well, Hare’s thing is systems programming” - that’s like saying that McDonald’s thing is hamburgers. A thing is more than a niche. It’s … well, it’s a thing.

                                          It might well be the case that you can only see a thing in retrospect (I feel like that might be the case with Python, for instance), but still, it feels like it’s missing, and not only here.

                                          1. 3

                                            It might well be the case that you can only see a thing in retrospect

                                            Considering how many false starts Java had, there was an obvious and error-ridden search process to locate the thing—first delivering portability, mainly for the benefit of Sun installations nobody actually had, then delivering web applets, which ran intolerably poorly on the machines people needed them to run on, and then as a mobile device framework that was, again, a very poor match for the limited hardware of the era, before finding a niche in enterprise web platforms. Ironically, I think Sun did an excellent job of identifying platforms in need of a thing, seemingly without realizing that their thing was a piss-poor contender for being the thing in that niche. If it weren’t for Sun relentlessly searching for something for Java to do, I don’t think it would have gotten anywhere simply on its merits.

                                            feels like it’s missing

                                            I agree, but I also think it’s a day old, and Ruby was around for years before Rails. Although I would say that Ruby’s creator built it out of a desire for certain affordances that were kind of unavailable from other systems of the time - a Smalltalk placed solidly in the Perl-Unix universe rather than isolated in a Smalltalk image. What we seem to have here is a very small itch (Zig with a simpler compiler?) being scratched very intensely.

                                            1. 2

                                              Ruby and Python were in the back of my mind the whole time I was writing the thing about things (hehe), and you have a point about Java, that thing flailed around A LOT before settling down. Very small itch is a good summary.

                                              Time will tell, but I ain’t betting on it.

                                              1. 1

                                                I’m with you. But we’ll see, I guess.

                                            2. 3

                                              Pretty much all widely used languages today have a thing. […] Even older languages like SQL, Fortran, Cobol, they all had a thing

                                              An obvious language you do not mention is C. What’s C’s thing in that framework? And why couldn’t Hare’s thing be “C, but better”, like C# is to Java? (Or arguably C++ is to C, or Zig is to C)

                                              1. 12

                                                C’s thing was Unix.

                                                1. 4

                                                  Incorrect… C’s thing was being a portable, less-terrible macroassembler-ish tool.

                                                2. 3

                                                  Well, I did say a thing is not the only determinant for widespread adoption. I don’t think C had a thing when it became widely used. Maybe portability? It was the wild wild west days, though.

                                                  Hare could very well eat C’s lunch and became big. But being possible is far away from being likely.

                                                  1. 2

                                                    C’s thing is that it’s a human-friendly assembly.

                                                      strcpy is rep movsb, va_list is a stack parser, etc.

                                                    1. 5

                                                      But it’s not. At least not once you turn on optimizations. This is a belief people have that makes C seem friendlier and lower level, but there have been any number of articles posted here about the complex transformations between C and assembly.

                                                      (Heck, even assembly isn’t really describing what the CPU actually does, not when there’s pipelining and multiprocessing going on.)

                                                      1. 2

                                                        But it is. Sure, you can tell the compiler to optimize, in which case all bets are obviously off, but it doesn’t negate the fact that C is the only mainstream high-level language that gets you as close to the machine language as it gets.

                                                        That’s not a belief, it’s a fact.

                                                        1. 4

                                                          you can tell the compiler to optimize, in which case all bets are obviously off

                                                          …and since all deployed code is optimized, I’m not sure what your point is.

                                                          Any modern C compiler is basically humoring you, taking your code as a rough guideline of what to do, but reordering and inlining and unrolling and constant-folding, etc.

                                                          And then the CPU chip gets involved, and even the machine instructions aren’t the truth of what’s really going on in the hardware. Especially in x86, where the instruction set is just an interpreted language that gets heavily transformed into micro-ops you never see.

                                                          If you really want to feel like your C code tells the machine exactly what to do, consider getting one of those cute retro boards based on a Z-80 or 8086 and run some 1980s C compiler on it.

                                                          1. -1

                                                            No need to lecture and patronize if you don’t get the point.

                                                            C was built around machine code, with literally every language construct derived from a subset of the latter and nothing else. It still remains true to that spirit. If you see a piece of C code, you can still make a reasonable guess at what it roughly translates to, even if it’s unrolled, inlined or trimmed. In comparison, in other languages “a += b” or “x = y” may translate into pages of binary.

                                                            Do you understand the point?

                                                            1. 2

                                                              C Is Not a Low-level Language

                                                              The post you’re replying to isn’t patronizing you, it’s telling the truth.

                                                              1. 2

                                                                You are missing the point just the same.

                                                                It’s not that C generates the exact assembly you’d expect; it’s that there’s a cap on what it can generate from a given piece of code you are currently looking at. “x = y” is a memcpy at worst, and dereferencing a pointer does more or less just that. Not the case with C++, let alone Go, D, etc.

                                                                1. 1

                                                                  I suggest reading an intro compilers textbook. Compilers do basic optimizations like liveness analysis, dead-store elimination, etc. Just because you write down “x = y” doesn’t mean the compiler will respect it and keep the load/store in your binary.

                                                                  1. -1

                                                                    I suggest trying to make a rudimentary effort to understand what others are saying before dispensing advice that implies they are dolts.

                                                              2. 2

                                                                If you see a piece of C code, you can still make a reasonable guess to what it roughly translates to.

                                                                As someone who works on a C compiler for their day job and deals with customer support around this sort of thing, I can assure you this is not true.

                                                                1. 2

                                                                  See my reply to epilys.

                                                                  Can you share an example of resulting code not being even roughly what one was expecting?

                                                                  1. 4

                                                                    Some general observations. I don’t have specific examples handy and I’m not going to spend the time to conjure them up for what is already a thread that is too deep.

                                                                    • At -O0 there are many loads and stores generated that are not expected. This is because the compiler is playing it safe and accessing everything from the stack. Customers generally don’t expect that and some complain that the code is “stupid”.
• At -O1 and above, lots of code gets moved around, especially when inlining and hoisting code out of loops. Non-obvious loop invariants and loops that have no effect on memory (because the user forgot a volatile) regularly result in bug reports saying the compiler is broken. In nearly every case, the user expects all the code they wrote to be there in the order they wrote it, with all the function calls in the same order. This is rarely the case.
                                                                    • Interrupt code will be removed sometimes because it is not called anywhere. The user often forgets to tag a function as an interrupt and just assumes everything they write will be in the binary.
                                                                    • Our customers program microcontrollers. They sometimes need timing requirements on functions, but make the assumption that the compiler will generate the code they expect to get the exact timing requirements they need. This is a bad assumption. They may think that a series of reads and writes from memory will result in a nearly 1-1 correspondence of load/stores, but the compiler may realize that because things are aligned properly, it can be done in a single load-multiple/store-multiple instruction pair.
• People often expect one statement to map to a contiguous region of instructions. When optimization is turned on, that’s not true in many cases. The start and end of something as “simple” as x = y can be very far apart.

This is just from recent memory. There is really no end to it. I won’t get into the “this instruction sequence takes too many cycles” reports as those don’t seem to match your desired criteria.

                                                                    1. 1

                                                                      Thanks. These are still more or less in the ballpark of what’s expected with optimizations on.

I’ve run into at least a couple of these, but I can remember only one case when it was critical and required switching optimizations off to get what we needed from the compiler (it had to do with handling very small packets in the nic driver). Did kill a day on that though.

                                                  1. 4

This is a long-standing discussion. You may want to read Paul Prescod’s XML is not S-expressions to see why this is subtly wrong and not really the best of ideas.

                                                    1. 2

Thank you for sharing that link; that was the first time I’ve noticed someone speaking out against using S-expressions for HTML/XML. I’d disagree and say that S-expressions are easier to work with, and there exists something like Scribble for documentation in Racket.

                                                      1. 1

                                                        There are also strong mappings like SXML

                                                      2. 2

                                                        I’ve been generating HTML from S-expressions for decades and I’ve got to say none of the downsides mentioned in the article have ever come up. Being able to use tools like paredit for authoring documents is a huge leap over the editing tools I’ve tried that are available for HTML.

                                                        Granted I’m typically using an s-expression flavor which allows for key/value data structures with {} since I do kind of agree that forcing associative data into parens isn’t very clear.

                                                        The XML one defaults to treating random characters as text, not as markup.

                                                        This might be considered an advantage when you’re just manually authoring documents directly, but when you’re writing application code that emits HTML (which is the case in the original article) this property is very bad.

In the S-expression version, the software does not know that there is a missing parenthesis until it gets to the end of the document. […] Smart editing tools can help but they cannot solve the problem.

                                                        Incorrect; paredit and other structural editors completely solve this.

                                                        the canonical documentation of the Scheme and Lisp standards is maintained not in S-expression syntax but in LaTeX

                                                        This is a simple case of bootstrapping; you can’t define something using the thing that’s in the process of being defined. Obviously bootstrapped lisps do exist, you always have to have some starting point for a bootstrap.

                                                        DTDs, RELAX and XML Schema define constraints on individual instances of XML documents.

                                                        I guess Clojure spec didn’t exist when this document was written, but it does now.

                                                        1. 1

                                                          Not having to escape quotes in XML was the killer feature for me over Sexps.

                                                          (Well, that and inline markup.)

                                                        1. 13

                                                          I’m quite skeptical of the real world value of 24bit color in a terminal at all, but the biggest problem I have with most terminal colors is they don’t know what the background is. So they really must be fully user configurable - not just turn on/off, but also select your own foreground/background pairs - and this is easier to do with a more limited palette anyway.

                                                          I kinda wish that instead of terminal emulators going down the 24 bit path, they instead actually defined some kind of more generic yet standardized semantic palette entries for applications to use and for users to configure once and done to get across all applications.

                                                          alas.

                                                          1. 4

                                                            I’m quite skeptical of the real world value of 24bit color in a terminal at all

                                                            I have similar misgivings, but I admit to liking the result of 24-bit colour. It’s useful! I just don’t like how it gets there.

A never-ending source of problems with the terminal colours added to the output of utilities these days is that in almost every case they are optimized for dark mode. I don’t use, nor can I stand, dark mode. It is horrible to read. But as a result, the colour output from these tools is unreadable. bat is the most recent one I tried: I ran it on a small C file and I literally couldn’t read most of the output.

                                                            Yes, you can configure them but when they are useless out-of-the-box, the incentive is very low to want to configure everything. And then, I could just… not configure them and use the standard ones that are still just fine.

                                                            Terminal colours are really useful. I find 24-bit colour Emacs in a terminal pretty nice. It’s the exception. Most other modern terminal tools that produce colour output don’t work for me because they can’t take into account my current setup.

Having standard colour palettes that the tools could access would be much better.

                                                            1. 4

I’ve started polling my small sample size of students and they almost unanimously prefer dark mode. I suspect this is most people’s preference, which is why it’s the default of most tools.

                                                              Personally I prefer dark because I have a lot of floaters in my eyes that are distracting with light backgrounds. For many years I had to change the defaults to dark.

                                                              That said, I like to be able to toggle back and forth between light and dark. When I’m outside in the sun, or using a projector, light mode is critical. This is made difficult by every tool using their own color palette rather than the terminal’s. Some tools can be configured to do so, and maybe that should be their default.

                                                              1. 5

I suspect this is most people’s preference, which is why it’s the default of most tools.

                                                                Back when I was in undergrad (~25 years ago), light mode was what everyone used. Then again, it was always on a CRT monitor and was the default for xterms everywhere. If you got a dark theme happening, it attracted some attention because you knew what you were doing. People did it to show off a bit. (I did it too!)

Then I got older and found dark backgrounds remarkably difficult to read from. I haven’t used them for well over 15 years. I simply cannot read comfortably on such colour schemes, which is why I have to use reader view or the zap colours bookmarklet all the time.

                                                                I’m not saying dark mode is bad, but I am saying it’s probably trendy. I suspect things will swing in a different direction eventually, especially as the eyes of those who love it now get older. (They inevitably get worse! Be ready for it.) So the default will likely change. In which case, maybe we should really consider not hard-baking colour schemes into tools and move the colour schemes to somewhere else, as you mention. This is the better way to go. As I mention elsewhere in the thread, configuring bat, rg, exa, and all these modern tools individually is just obnoxious. Factor the colour schemes out of the tools somehow. It’s a better solution in the long run.

                                                                1. 1

                                                                  I too find light displays easier to read.

                                                                  From memory, the first time I heard of TCO-approved screens was when Fujitsu(?) introduced a CRT screen with high resolution, a white screen, and crisp black text. This was considered more legible and more ergonomic.

                                                                  (TCO is Tjänstemännens Centralorganisation, the main coordinating body of Swedish white-collar unions. Ensuring a good working environment for their members is a core mission.)

                                                                  1. 2

                                                                    What I find helps the most is reducing the blue light levels - stuff like f.lux works well.

                                                                    I’m also looking into e-ink monitors, but damn, they’re pricey.

                                                              2. 3

Yeah, I’m a fan of light mode (specifically white backgrounds) on screen most of the time too, and I actually found colors so bad that’s a big reason why I wrote my own terminal emulator. Just changing the palette by itself wasn’t enough; I wanted it to adjust based on the dynamic background too. Say an application tries to print blue on black: my terminal will choose a different “blue” than if it were blue on white. Having the terminal emulator itself do this means it applies to all applications without reconfiguration, it applies if I do screen -d -r from a white screen to a black screen (since the emulator knows the background, unlike the applications!), and it applies even if the application specifically printed blue on black, since that just drives me nuts and I see no need to respect an application that doesn’t respect my eyes.

A little thing, but it has brought me great joy. Even stock ls would print a green I found so hard to read on white. And now my thing adjusts green and yellow on white too!

Whenever I see someone else advertising their new terminal emulator, I don’t look for yet another GPU renderer. I look to see what they did with colors and scrollback controls.

                                                                1. 2

                                                                  I got fed up with this and decided to do something about it, so after what felt like endless fiddling and colorspace conversions, I have a color scheme that pretty much succeeds at making everything legible, in both light and dark mode. It achieves this by

                                                                  • Deriving color values from the L*C*h* color space to maximize the human-perceived color difference.
                                                                  • Assigning tuned color values as a function of logical color (0-15), whether it’s used for foreground or background, and whether it’s for dark or light mode.
                                                                  • Assigning the default fg/bg colors explicitly as a 17th logical color, distinguished from the 16 colors assignable by escape sequences.

                                                                  As a result, I can even read black-on-black and white-on-white text with some difficulty.

                                                                  Here it is: https://github.com/svenssonaxel/st/blob/master/config.h#L117-L207

                                                                  1. 2

                                                                    I had the same problem with bat so I contributed 8-bit color schemes for it: ansi, base16, and base16-256. The ansi one is limited to the 8 basic ANSI colors (well really 6, since it uses the default foreground instead of black/white so that it works on dark and light terminals), while the base16 ones follow the base16 palette.

                                                                    Put export BAT_THEME=ansi in your .profile and bat should look okay in any terminal theme.

                                                                    1. 2

                                                                      As I said, I could set the theme, but my point was that I don’t want to be setting themes for all these things. That’s maintenance work I don’t need.

                                                                      1. 1

                                                                        I definitely agree that defaulting to 24 bit colour is a terrible choice for command line tools, but when it’s a single environment variable to fix, I do think some (bat) are worth the minor, one-off inconvenience.

                                                                  2. 3

                                                                    I agree 100%. I think the closest thing we have to a standardized semantic palette is the base16 palette. It’s a bit confusing because it’s designed for GUI software too, not just terminals, so there are two levels of indirection, e.g. base16 0x8 = ANSI 1 = red-ish. It works great for the first eight ANSI colors:

                                                                    base16  ANSI  meaning
                                                                    ======  ====  ==========
                                                                    0x0     0     background
                                                                    0x8     1     red-ish/error
                                                                    0xb     2     green-ish/success
                                                                    0xa     3     yellow-ish
                                                                    0xd     4     blue-ish
                                                                    0xe     5     violet-ish
                                                                    0xc     6     cyan-ish
                                                                    0x5     7     foreground
                                                                    

                                                                    The other 8 colors are mostly monochrome shades. You need these for lighter text (e.g. comments), background highlights (e.g. selections), and other things. The regular base16 themes place these in ANSI slots 8-15, which are supposed to be the bright colors, which breaks programs that assume those slots have the bright colors.

The base16-256 variants copy slots 1-6 into 9-14 (i.e. bright colors look the same as non-bright, which is at least readable), and then put the other base16 colors into 16-21. It recommends doing this maneuver with base16-shell, which IMO defeats the purpose of base16. base16-shell is just a hack to get around the fact that most terminal emulators don’t let you configure all the palette slots directly; kitty does, so I use my own base16-kitty theme to do that, and use base16-256 for vim, bat, fish, etc. without base16-shell.

                                                                  1. 4

To add debugging symbols when compiling with gcc, add the -g flag.

                                                                    If you work with a code base that has a lot of macro definitions, consider increasing the level of debug info by using -g3, which will include macros.

                                                                    Relevant docs:

                                                                    1. 1

Something I found hilarious is that a variable might be used on lines 200-210 and then again on line 8544. Nowhere else.

                                                                      This sounds like pretty much every significant code base I have ever had to work on. It is not unusual.

                                                                      1. 4

                                                                        Bear in mind this is for AIX targets. Some of the advice about how many instructions it takes to initialize things is not universal.

                                                                        1. 4

                                                                          Yeah, that threw me for a loop. Does AIX only run on POWER CPUs?

Also, some of the advice seems kind of archaic (like talking about local and global variables as if those were the only kinds, i.e. no such thing as heap allocation or pointers) or obvious (who would use a C string as an enum? Even a series of #defines, as in their “good” example, is pretty archaic compared to an enum.)

                                                                          1. 1

                                                                            Just to nitpick here, global vs local, heap vs stack and pointer vs non-pointer are all describing different, orthogonal attributes of a variable, specifically scope, allocation strategy, and type. In each case global/local, heap/stack, and pointer/non-pointer really are the only options (in C) for those attributes.

                                                                        1. 17

                                                                          Its package ecosystem is in excellent condition and packages such as org-mode and eglot / lsp-mode make even the most demanding programming languages a joy to work with in Emacs.

I work on a large C/C++ codebase as part of my day job and use lsp-mode/eglot (currently eglot) to navigate the code, with very few extensions. I also use the latest mainline Emacs with native compilation. I have been using Emacs for over 25 years and my customization is best categorized as “very light”. In short, my Emacs setup is not much beyond what ships with it.

                                                                          And it’s still just… slow. GCC has some pretty big files and opening them can take up to 10 seconds thanks to font-lock mode. (Yes, I know I can configure it to be less decorative, but I find that decoration useful.) It’s much worse when you open a file that is the output from preprocessor expansion (easily 20000+ lines in many cases).

                                                                          Log files that are hundreds of megabytes are pretty much a guaranteed way to bring Emacs to a crawl. Incremental search in such a buffer is just painful, even if you M-x find-file-literally.

                                                                          I had to turn off nearly everything in lsp-mode/eglot because it does nothing but delay my input. I can start typing and it will be 3-4 characters behind as it tries to find all the completions I’m not asking for. Company, flymake, eldoc are all intolerably slow when working with my codebase, and I have turned them all off or not installed them in the first place.

                                                                          M-x term is really useful, but do not attempt to run something that will produce a lot of output to the terminal. It is near intolerable. Literally orders of magnitude slower to display than an xterm or any other terminal emulator. (M-x eterm is no better.)

                                                                          The problem, of course, is that Elisp is simply not performant. At all. It’s wonderfully malleable and horribly slow. It’s been this way since I started using it. I had hopes for native compilation, but I’ve been running it for a few months now and it’s still bad. I love Emacs for text editing and will continue to use it. I tried to make it a “lifestyle choice” for a while and realized it’s not a good one if I don’t want to be frustrated all the time. Emacs never seems to feel fast, despite the fast hardware I run it on.

                                                                          1. 6

                                                                            The performance was the reason for me to leave Emacs. I was an evil mode user anyways so the complete switch to (neo)Vim was simple for me. I just could not accept the slowness of Emacs when in Vim everything is instant.

                                                                            E.g. Magit is always named as one of the prime benefits of Emacs. While its functionality is truly amazing its performance is not. Working on a large code base and repository I was sometimes waiting minutes! for a view to open.

                                                                            1. 3

                                                                              What did you find slow on Emacs aside from Magit?

                                                                              I actually use Emacs because I found it really fast compared to other options. For example, the notmuch email client is really quick on massive mailboxes.

                                                                              Some packages might be slow, though. I think the trick is to have a minimal configuration with very well chosen packages. I am particularly interested in performance because my machine is really humble (an old NUC with a slow SATA disk).

                                                                              1. 2

To be fair it was some time ago and I don’t remember all the details, but using LSPs for code completion/inspection, for example, was pretty slow.

                                                                                Compared to IDEs it might not even have been slow but similar. I however have to compare to Vim where I have equal capabilities but pretty much everything is instant.

                                                                                My machine was BTW pretty good hardware.

                                                                                1. 1

                                                                                  lsp-mode became much more efficient during the last year or so. Eglot is even more lightweight, I think. Perhaps it is worth giving it another go.

                                                                                  I think there was some initial resistance to LSP in the Emacs community and therefore they were not given the attention they deserve.

                                                                                  1. 2

                                                                                    Thanks for the notice! I may try it again in the future but currently I am very happy with my Neovim setup, which took me a long time to setup/tweak :)

                                                                              2. 2

                                                                                Out of curiosity, were you using Magit on Windows?

                                                                                I use Magit every day and my main machine is very slow. (1.5GHz 4 core cortex A53) Magit never struck me as particularly slow, but I’ve heard that on Windows where launching subprocesses takes longer it’s a different story.

                                                                                1. 3

                                                                                  but I’ve heard that on Windows where launching subprocesses takes longer

Ohh, you have no idea how slow it is in a corporate environment. Going through MSYS2, Windows Defender, with Windows being Windows and a corporate security system on top, it takes… ages. git add a single file? 20 seconds. Create a commit? Over a minute. It’s bonkers if you hit the worst case just right. (On a private Windows install with MSYS2 and exceptions set in Windows Defender it’s fine though, not much slower than my FreeBSD laptop.) I asked around and there is a company-wide, hardcoded path on every laptop that has exceptions in all the security systems, just to make life less miserable for programmers. It doesn’t solve the problem completely, but it helps.

Either wait an eternity or make a mockery of the security concept. Suffice it to say I stopped using Windows and cross-compile from now on.

                                                                                  1. 1

                                                                                    Can confirm. I use Magit on both Linux and Windows, and it takes quite a bit of patience on Windows.

                                                                                    1. 1

On Windows I think it’s particularly git that is slow, and Magit spawns git repeatedly. It also used to be very slow on Mac OS because of problems with fork performance. On Linux, it used to be slow with TRAMP. There are some tuning suggestions for all of these in the Magit manual, I think.

                                                                                      1. 1

                                                                                        Nope on Linux. As mentioned our code base is big and has many branches etc. Not sure where exactly Magit’s bottleneck was. It was quite some time ago. I just remember that I found similar reports online and no real solution to them.

                                                                                        I now use Lazygit when I need something more than git cli and it’s a fantastic tool for my purpose. I also can use it from within Vim.

                                                                                      2. 1

                                                                                        Working on a large code base and repository I was sometimes waiting minutes! for a view to open.

                                                                                        This happens for me as well with large changes. I really like Magit but when there are a lot of files it’s nearly unusable. You literally wait for minutes for it to show you an update.

                                                                                      3. 4

                                                                                        I know you’re not looking to customise much but wrt. terminals, vterm is a lot better in that regard.

                                                                                        1. 1

                                                                                          I actually switched to M-x shell because I found the line/char mode entry in term-mode to be annoying (and it seems vterm is the same in this respect). shell-mode has all the same slowness of term-mode, of course. I’ve found doing terminal emulation in Emacs to be a lost cause and have given up on it after all these years. I think shell-mode is probably the most usable since it’s more like M-x shell-command than a terminal (and that’s really its best use case).

                                                                                          1. 1

If you need ansi/curses there’s no good answer, and while I like term, it was too slow in the end and I left. I do think that for “just” using a shell, eshell is fine though.

                                                                                        2. 3

Do you use the jit branch of Emacs? I found that once I switched to it and it had jit-compiled things, my Emacs isn’t “fast” but it’s pretty boring now, in that what used to be slow is now at least performant enough for me not to care.

                                                                                          1. 2

                                                                                            Is there a brew recipe or instructions on compiling on Mac? Or does checking out the source and running make do the business?

                                                                                            1. 3

                                                                                              I use the emacs-plus package. It compiles the version you specify. I’m currently using emacs-plus@29 with --with-native-comp for native compilation, and probably some other flags.

                                                                                              1. 2

                                                                                                Thanks again, this is appreciably faster and I’m very pleased 😃

                                                                                                1. 2

                                                                                                  Awesome! also, check out pixel-scroll-precision-mode for the sexiest pixel-by-pixel scrolling. seems to be a little buggy in info-mode, can’t replicate with emacs -Q though, so YMMV.

                                                                                                2. 1

                                                                                                  Thank you that sounds perfect

                                                                                                3. 1

                                                                                                  I’m a Mac user and I found it very hard to compile Emacs.

                                                                                                  This might be a good starting point however:

                                                                                                  https://github.com/railwaycat/homebrew-emacsmacport

                                                                                                  1. 1

                                                                                                    I honestly don’t know; I use nix+home-manager to manage my setup on macOS. This is all I did to make it work across NixOS/Darwin:

                                                                                                    Added it as a flake input: https://github.com/mitchty/nix/blob/7e75d7373e79163f665d7951829d59485e1efbe2/flake.nix#L42-L45

                                                                                                    Then added the overlay nixpkgs setup: https://github.com/mitchty/nix/blob/7e75d7373e79163f665d7951829d59485e1efbe2/flake.nix#L84-L87

                                                                                                    Then just used it like so: https://github.com/mitchty/nix/blob/6fd1eaa12bbee80b6e80f78320e930d859234cd4/home/default.nix#L87-L90

                                                                                                    I gotta convert more of my config over, but that was enough to build it and get my existing ~/.emacs.d working with it, and it’s speedy to the point that I don’t care about Emacs slowness even on macOS anymore.

                                                                                                  2. 1

                                                                                                    Do you use the jit branch of emacs?

                                                                                                    Yes. I’ve been using the libgccjit/native compilation version for some time now.

                                                                                                  3. 2

                                                                                                    The problem, of course, is that Elisp is simply not performant.

                                                                                                    That’s half of it. Another half is that, IIRC, Emacs has rather poor support for asynchrony: most of elisp that runs actually blocks UI.

                                                                                                    1. 1

                                                                                                      In short, my Emacs set up is not much beyond what ships with it.

                                                                                                      Can share your config? I’m curious to know how minimal you made it.

                                                                                                      1. 1

                                                                                                        Here you go. It changes a little bit here and there with some experiments. The packages I currently have installed and use are: which-key, fic-mode, counsel, smartparens, magit, and solarized-theme. There may be a few others that I was trying out or that are only installed for some language support (markdown, yaml, and so forth).

                                                                                                        1. 1

                                                                                                          Thank you very much.

                                                                                                        2. 1

                                                                                                          Quick addendum on the config: that’s my personal config, which morphs into my work setup. My work one actually turns off flymake and eldoc when using eglot.

                                                                                                        3. 1

                                                                                                          Is there anything that has prevented a Neovim-style rewrite of Emacs? A Neomacs?

                                                                                                          I keep hearing about the byzantine C-layer of Emacs and the slowness of Elisp. And Emacs definitely has the community size to develop such an alternative. Why do you think no one has attempted such an effort? Or maybe I should add “recently” to the question. As I know there are other Emacs implementations.

                                                                                                          1. 4

                                                                                                            As crusty as the Emacs source can be, it’s nowhere near as bad as the Vim source was, which was a rat’s nest of #ifdef. That’s why Neovim basically had to rewrite their way to a fork. The Emacs implementation is surprisingly clean, as long as you can tolerate some of the aged decisions (and GNU bracing).

                                                                                                            1. 2

                                                                                                              There is Climacs, which isn’t exactly the same, but is close.

                                                                                                              The problem for any new Emacs clone will be that it has to run all the Elisp out there. Unless there is a substantial speed improvement to Elisp or a very good automatic translation tool, any such project will be doomed from the start.

                                                                                                          1. 6

                                                                                                            Having enjoyed SICP immensely and craving a good book on “advanced” programming techniques to hone my skills, I really wanted to like this book, but I too came away from it disappointed.

                                                                                                            Like OP says, the code examples can be glossed over because they fail to really illuminate the reason behind doing things this way, and they’re even incomplete - major chunks of implementation are not in the book (but presumably they are part of the downloadable package - I didn’t care enough to download it). The parts that are in the book are simply rather tedious to read through as well. And they are overly mathy/physics-oriented (like some of the worse parts of SICP), which doesn’t really fit the daily practice of most software engineers.

                                                                                                            Most of the techniques in the book are useful in a variety of situations, granted. But, like OP indicates, most of these you’ve probably already encountered as features in a programming language or in an ad-hoc implementation for a project. Or you may even have invented some of them independently by yourself. There’s nothing really new here.

                                                                                                            Finally, I’m not so sure these techniques will help you prevent painting yourself into a corner - in fact, overuse of these techniques will lead to an inscrutable mess of code. There’s a certain type of disconnect in your code when you make things “extensible” at any point. This will completely obscure the flow of control, making it harder to understand what’s going on and harder to pinpoint the source of a bug. With multiple “layers” contributing to a final result, the bug could be in a layer, or even in the resulting interplay between the layers. Thankfully, the authors do warn about this, but not enough IMO.

                                                                                                            1. 4

                                                                                                              in fact, overuse of these techniques will lead to an inscrutable mess of code.

                                                                                                              Very much so. I left a whole bunch of stuff out of the review going on about this. It’s mind-boggling that it is not addressed in the book that is ostensibly about “design”. I have implemented a few systems with extensibility and, at best, only a small number of users actually made use of them. For most, the abstractions were confusing. Dynamic/static typing makes no difference here: it can be a maze of macros or a tower of types. In either case, you’re probably going to end up with a mess.

                                                                                                            1. 5

                                                                                                              It would be hard for me to agree more with the final paragraph. This book is full of powerful ideas, and yet I found it to be largely out-of-touch with the everyday practitioner. Shame that the authors couldn’t bridge the gap.

                                                                                                              1. 3

                                                                                                                This was easily the most frustrating thing about the book. When I finished it, I thought about what Guy Steele said in the foreword, and the blurbs on the back from people like Rich Hickey, and thought, “did these people actually read this?” Because it is simply not good, despite the fact that the techniques are really good to know. (And Chris Hanson spent a bunch of time working at Google, so there is certainly industry experience behind the authorship.)

                                                                                                                1. 4

                                                                                                                  Well, Guy L. Steele is probably a personal friend of the authors (with him having co-written the Lambda Papers with Sussman), and if a friend asks you to write a foreword to their book it’ll be hard to say no. And of course, you’re not gonna badmouth a book in the book’s foreword.

                                                                                                                  And Rich Hickey certainly likes the ideas in the book, as some of them (the ones he specifically calls out - metadata and dynamic dispatch) are built-ins in Clojure, and he’s big on an “additive” style of programming in general.

                                                                                                                  1. 1

                                                                                                                    I think the book describes some great ideas, but should have gone through many more iterations to be a worthy successor to SICP. The expectations were simply too high.

                                                                                                                    I guess SICP, like many other great academic books, matured slowly by starting as some rough lecture notes and getting more polished year after year, before publication.

                                                                                                                1. 3

                                                                                                                  This is the fundamental idea behind what if? — start small, with code that passes some conditions. Then question yourself on edge cases.

                                                                                                                  This, to me, is just the natural way to program. And it’s how I’ve been coding for over two decades. Sitting down to think about the problem beforehand in this way also prevents you from writing a whole bunch of tests or other code that you’re just going to throw away.

                                                                                                                  1. 11

                                                                                                                    Posts about society aren’t being removed because they are mis-tagged, but because they’re off topic. It would make sense to me not to add tags to cover off topic areas.

                                                                                                                    (And conversely, just because you can post something under an extant tag doesn’t mean it’s on topic)

                                                                                                                    1. 1

                                                                                                                      Right, I’m suggesting we have the tag and that such posts be on topic under that tag.

                                                                                                                      1. 4

                                                                                                                        And we’re saying we don’t want them.

                                                                                                                        1. 5

                                                                                                                          I think you’re missing what is being said. “On topic” != “tagged correctly”.

                                                                                                                          Such articles would still be off topic, fundamentally, no matter how they were tagged. It’s a subject we don’t cover on this site.

                                                                                                                          If there was a badminton tag, badminton articles would still be off topic. Make sense?

                                                                                                                          You can read more about this site here: https://lobste.rs/about

                                                                                                                          1. 2

                                                                                                                            Yes but it could be on topic if we chose as a community to make it on topic.

                                                                                                                            1. 10

                                                                                                                              I think it’s pretty clear the community does not want it to be on topic. In the past, discussions around these sorts of topics have devolved into name calling and bad-faith arguments with practically no redeemable content. Arguably, lobste.rs-style link aggregation with commenting and light-touch moderation is a bad format for such discussions, regardless of the site. It hasn’t worked here in the past and there is little to suggest it will in the future.

                                                                                                                              It would be better for such things to have a different site, perhaps a sister one.

                                                                                                                      1. 1

                                                                                                                        I feel sad that this approach isn’t the default, but maybe I’m just old. My web page started as static HTML served by Apache in the 1990s. Today I’m using Route53, an S3 bucket, and Cloudfront to do https, and I generate the page with Hakyll (because I set it up that way some years back and I’m not putting the energy into changing).

                                                                                                                        The favicon trick is nice, though. Have to add that…

                                                                                                                        1. 1

                                                                                                                          I think for it to really be popular we need a big centralized service to push it. Too many non-technical users don’t want to (and shouldn’t have to) learn about DNS and git. OTOH, for technical users this is pretty darn easy, and showing that is the main reason I published this.

                                                                                                                          I’ll probably be switching to Hakyll at some point, just to have more excuses to keep my tiny sliver of Haskell knowledge going :)

                                                                                                                          1. 2

                                                                                                                            I have thought about building such a service that just proxies setting all that up and providing a CyberDuck configuration that lets people upload to their bucket. That’s more or less what I do for a few people now, but they pay me once a year and I take care of the AWS bills and maintenance based on that.

                                                                                                                            1. 2

                                                                                                                              […] that don’t want to (and shouldn’t) learn about DNS and git. OTOH for technical users this is pretty darn easy, […]

                                                                                                                              As they say on Wikipedia, “citation needed”.

                                                                                                                              1. 1

                                                                                                                                There are two statements there. Which one?

                                                                                                                                And they’re both just like, my opinion man.

                                                                                                                          1. 2

                                                                                                                            Is it just me or does the test suite seem woefully inadequate?

                                                                                                                            Also, using C++ to implement a core C library feels a bit like the snake that eats its own tail.

                                                                                                                            1. 1

                                                                                                                              Also, using C++ to implement a core C library feels a bit like the snake that eats its own tail.

                                                                                                                              LLVM’s libc is also using C++. The requirements for a language implementing libc are:

                                                                                                                              • Must not depend on anything that libc provides (at least, in the parts that depend on that).
                                                                                                                              • Must be able to export C symbols.

                                                                                                              C++ meets these requirements, as does Rust with no_std. A lot of the things in libc end up being macros that amount to error-prone, hand-rolled versions of C++ templates. For example, qsort, qsort_r and qsort_b are all exactly the same algorithm, with minor tweaks to how they invoke the callbacks. Some things, such as bits of locale state, need atomic reference counting. You can implement this in C with explicit calls to incref and decref functions, but using C++ smart pointers makes it almost impossible to get wrong.
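                                                                                                              To make the qsort point concrete, here is a hedged sketch of the idea: one C++ template backs the whole qsort family, with the "minor tweaks to how they invoke the callbacks" isolated in small lambdas. Assumptions: this is not the actual LLVM libc code, a simple insertion sort stands in for a production sort, and the exported names my_qsort/my_qsort_r are placeholders so the example doesn't clash with the host libc's symbols (a real libc would export the standard names).

```cpp
// Sketch only: a single generic sort routine shared by the qsort family.
// Insertion sort is a stand-in for a real sort; my_qsort/my_qsort_r are
// placeholder names for the real exported C symbols.
#include <cstddef>
#include <utility>

namespace detail {
// Compare is any callable taking two element pointers and returning a
// qsort-style negative/zero/positive int.
template <typename Compare>
void byte_sort(void *base, std::size_t nmemb, std::size_t size, Compare cmp) {
  char *a = static_cast<char *>(base);
  for (std::size_t i = 1; i < nmemb; ++i)
    for (std::size_t j = i; j > 0 && cmp(a + j * size, a + (j - 1) * size) < 0; --j)
      for (std::size_t b = 0; b < size; ++b)
        // Swap adjacent elements bytewise; element type is unknown here.
        std::swap(a[j * size + b], a[(j - 1) * size + b]);
}
} // namespace detail

// The C entry points are thin wrappers: the only difference between them
// is how the user's callback is invoked, captured in the lambdas below.
extern "C" {
void my_qsort(void *base, std::size_t nmemb, std::size_t size,
              int (*compar)(const void *, const void *)) {
  detail::byte_sort(base, nmemb, size,
                    [=](const void *x, const void *y) { return compar(x, y); });
}

void my_qsort_r(void *base, std::size_t nmemb, std::size_t size,
                int (*compar)(const void *, const void *, void *), void *arg) {
  detail::byte_sort(base, nmemb, size,
                    [=](const void *x, const void *y) { return compar(x, y, arg); });
}
}
```

                                                                                                              A C implementation typically gets the same sharing either by duplicating the loop per variant or by generating the variants from a macro; the template version keeps one audited copy of the algorithm.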

                                                                                                                              I’ve worked a reasonable amount on things in libc implementations and a significant fraction of that time has involved thinking ‘this would be much less code and easier to audit if I wrote it in C++’.

                                                                                                                              We’ve implemented malloc in C++ and that’s one of the lowest-level parts of libc. It’s about half the size of jemalloc (which is written in C) and performs better.

                                                                                                                              1. 1

                                                                                                                                I suppose I say this because I work in the embedded world, where C++ for C libs doesn’t fly.

                                                                                                                                1. 1

                                                                                                                  I don’t really buy that argument. C++ can generate code at least as small as C (there was a great talk someone linked to here using modern C++17 features to compile for a 6502, generating code as good as hand-written assembly). The only embedded programming that I’ve done has been on things like the Cortex-M0, and the SDKs supported C++ out of the box and let me write high-level generic code that compiled down to a few hundred bytes of object code when instantiated for my specific SoC. Mind you, they were freestanding environments and so didn’t actually have a libc.

                                                                                                                  There are only two reasons I wouldn’t use C++ in an embedded context. The first is the lack of a C++ compiler. That still happens for some of the more esoteric ISAs, but it isn’t a problem for any M-profile Arm cores, and 16 KiB of RAM is plenty for C++. The other is if I’m right at the constrained end of the spectrum (things with on the order of hundreds of bytes of instruction memory, 1 KiB of data memory), and there I wouldn’t use C either; I’d use assembly, because any language that assumes a stack would be a problem (though the 6502 talk I mentioned above relied on inlining to completely eliminate the stack, so C++ might even be feasible there).

                                                                                                                                  You do generally need to disable exceptions and RTTI for embedded C++ work (but you often do that even on large systems). You also need to think about your use of templates, to ensure that you’re not bloating your code, but you need to do the same with C macros and C++ gives you tools like non-templated base classes for common logic that make this kind of thing easier. C++ also makes it easy to force certain bits of code to be evaluated at compile time (giving a compile failure if they aren’t), which means that you’re less at the whim of optimisers for code size than with C. Most of the C codebases I’ve seen that need this end up with something horrible like Python code that generates a C data structure or even C code, whereas in C++ everything can be in the same language.
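                                                                                                                  A tiny illustration of the compile-time point (a hedged sketch, not from any particular codebase; the CRC-8 table and polynomial 0x07 are just a hypothetical example): where a C project might run a Python script to emit a generated .c file containing a lookup table, a constexpr function builds the table in the same translation unit, and declaring the table itself constexpr guarantees a compile error if it cannot be computed at compile time.

```cpp
// Sketch: a lookup table built at compile time instead of by an external
// code generator. CRC-8 with polynomial 0x07 is an arbitrary example.
#include <array>
#include <cstddef>
#include <cstdint>

// constexpr function: callable at compile time (loops need C++14 or later).
constexpr std::array<std::uint8_t, 256> make_crc8_table(std::uint8_t poly) {
  std::array<std::uint8_t, 256> t{};
  for (int i = 0; i < 256; ++i) {
    std::uint8_t c = static_cast<std::uint8_t>(i);
    for (int bit = 0; bit < 8; ++bit)
      // Shift left; if the top bit fell out, fold in the polynomial.
      c = static_cast<std::uint8_t>((c & 0x80) ? (c << 1) ^ poly : c << 1);
    t[i] = c;
  }
  return t;
}

// constexpr variable: the compiler must evaluate the call now, so the table
// is baked into the binary with no runtime initialisation, and the build
// fails outright if the computation can't be done at compile time.
constexpr auto kCrc8Table = make_crc8_table(0x07);

// Ordinary runtime function using the precomputed table.
std::uint8_t crc8(const std::uint8_t *data, std::size_t len) {
  std::uint8_t crc = 0;
  for (std::size_t i = 0; i < len; ++i)
    crc = kCrc8Table[crc ^ data[i]];
  return crc;
}
```

                                                                                                                  The equivalent C setup usually ends up with a build step (a script plus a generated file to keep in sync); here everything lives in one file in one language, which is the point being made above.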

                                                                                                                                  C++ is far from a perfect language but it’s a much better language than C for anything that C is used for.

                                                                                                                              2. 1

                                                                                                                First, C++ is overall a better language than C. Second, libc is in no way a “core” library: it is a compatibility layer. For example, consider the position of the CRT in Windows. Third, the Managarm kernel is written in C++, so it is natural to use the same in userland too.

                                                                                                                                1. 1

                                                                                                                                  Second, libc is in no way a “core” library: it is a compatibility layer.

                                                                                                                                  This assumes you have an operating system.