1. 24

    I agree with most of what’s said in the article. However, it misses the forest for the trees a bit, at least judging by the introduction. Programs that take seconds to load or display a list are not just ignoring cache optimizations. They’re using slow languages (or language implementations, for the pedants out there) like CPython, where even the simplest operations require a dictionary lookup, or using layers and layers of abstraction like Electron, or making HTTP requests for the most trivial things (I suspect that’s what makes Slack slow; I know it’s what makes GCP’s web UI absolutely terrible). A lot of bad architectural choices, too.

    Cache optimizations can be important but only as the last step. There’s a lot to be fixed before that, imho.

    1. 16

      Even beyond that, I think there are more baseline things going on: most developers don’t even benchmark or profile. In my experience, the most egregious performance problems I’ve seen have been straight-up bugs, and they don’t get caught because nobody’s testing. And the profiler basically never agrees with what I would have guessed the problem was. I don’t disagree with the author’s overall point, but it’s rare to come across a program that’s slow enough to be a problem and doesn’t have much lower-hanging fruit than locality issues.

      1. 3

        I agree so much! I’d even say that profiling is one half of the problem (statistical profiling, that is, like perf). The other half is tracing, which nowadays can be done with very convenient tools like Tracy or the Chrome trace visualizer (“catapult”) if you instrument your code a bit so it can spit out JSON traces. These give insight into where time is actually spent.
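
        For anyone who hasn’t tried this: the “instrument your code a bit” part can be tiny. Here’s a minimal sketch in Python, assuming the Chrome trace viewer’s JSON format (“X” complete events, timestamps and durations in microseconds); the function names are mine, just for illustration:

        import json, os, threading, time
        from contextlib import contextmanager

        _events = []

        @contextmanager
        def span(name):
            start = time.perf_counter_ns() // 1000              # microseconds
            try:
                yield
            finally:
                end = time.perf_counter_ns() // 1000
                _events.append({"name": name, "ph": "X", "ts": start, "dur": end - start,
                                "pid": os.getpid(), "tid": threading.get_ident()})

        def dump(path="trace.json"):
            with open(path, "w") as f:
                json.dump({"traceEvents": _events}, f)

        with span("load_config"):
            time.sleep(0.05)                                    # stand-in for real work
        dump()                                                  # open trace.json in chrome://tracing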

        1. 1

          Absolutely. Most developers only benchmark if there’s a serious problem, and most users are so inured to bad response times that they just take whatever bad experience they receive and try to use the app regardless. Most of the time it’s some stupid thing the devs did that they didn’t realize and didn’t bother checking for (oops, looks like we’re instantiating this object on every loop iteration, look at that.)

        2. 9

          Programs that take seconds to load or display a list are not just ignoring cache optimizations.

          That’s right. I hammered on the cache example because it’s easy to show what a massive difference it can make, but I did not mean to imply that it’s the only reason. Basically, any time we lose track of what the computer must do, we risk introducing slowness. Now, I don’t mean that having layers of abstraction or using dictionaries is inherently bad (they will likely have a performance cost, but that cost may be reasonable in order to reach another objective), but we should make these choices intentionally rather than going by rote, by peer pressure, by habit, etc.

          1. 5

            The article implies the programmer has access to low level details like cache memory layout, but if you are programming in Python, Lua, Ruby, Perl, or similar, the programmer doesn’t have such access (and for those languages, the trade off is developer ease). I’m not even sure you get to such details in Java (last time I worked in Java, it was only a year old).

            The article also makes the mistake that “the world is x86”—at work, we still use SPARC based machines. I’m sure they too have cache, and maybe the same applies to them, but micro-optimizations are quite difficult across different architectures (and even across the same family but different generations).

            1. 6

              The article implies the programmer has access to low level details like cache memory layout, but if you are programming in Python, Lua, Ruby, Perl, or similar, the programmer doesn’t have such access

              The level of control that a programmer has is reduced in favor of other tradeoffs, as you said, but there’s still some amount of control. Often, it’s found in those languages’ best practices. For example, in Erlang one should prefer to use binaries for text rather than strings, because binaries are a contiguous sequence of bytes while strings are linked lists of characters. Another example: in Python it’s preferable to accumulate small substrings in a list and then use the join method rather than using concatenation (full += sub).
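
              To make the Python case concrete, here’s a rough sketch (the function names are mine, just for illustration):

              # Accumulate-and-join: the final string is built once, at the end.
              def build_joined(parts):
                  chunks = []
                  for p in parts:
                      chunks.append(p)
                  return "".join(chunks)

              # Repeated concatenation: each += may copy everything accumulated so far,
              # so this tends towards quadratic time with many small substrings.
              def build_concat(parts):
                  full = ""
                  for p in parts:
                      full += p
                  return full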

              The article also makes the mistake that “the world is x86”—at work, we still use SPARC based machines. I’m sure they too have cache, and maybe the same applies to them, but micro-optimizations are quite difficult across different architectures (and even across the same family but different generations).

              I don’t personally have that view, but I realize that it wasn’t made very clear in the text, my apologies. Basically what I want myself and other programmers to be mindful of is mechanical sympathy — to not lose track of the actual hardware that the program is going to run on.

              1. 4

                I know a fun Python example. Check this yes implementation:

                def yes(s):
                  p = print
                  while True:
                    p(s)
                
                yes("y")
                

                This hot-loop will perform significantly better than the simpler print(s) because of the way variable lookups work in Python. It first checks the local scope, then the global scope, and then the built-ins scope before finally raising a NameError exception if it still isn’t found. By adding a reference to the print function to the local scope here, we reduce the number of hash-table lookups by 2 for each iteration!

                I’ve never actually seen this done in real Python code, understandably. It’s counter-intuitive and ugly. And if you care this much about performance then Python might not be the right choice in the first place. The dynamism of Python (any name can be reassigned, at any time, even by another thread) is sometimes useful but it makes all these lookups necessary. It’s just one of the design decisions that makes it difficult to write a high-performance implementation of Python.

                1. 3

                  That’s not how scoping works in Python.

                  The Python parser statically determines the scope of a name (where possible.) If you look at the bytecode for your function (using dis.dis) you will see either a LOAD_GLOBAL, LOAD_FAST, LOAD_DEREF, or LOAD_NAME, corresponding to global, local, closure, or unknown scope. The last bytecode (LOAD_NAME) is the only situation in which multiple scopes are checked, and these are relatively rare to see in practice.
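
                  A quick way to see that for yourself (exact opcodes and offsets vary between CPython versions):

                  import dis

                  def f(s):
                      print(s)        # no assignment to print in this scope: compiled as LOAD_GLOBAL

                  def g(s):
                      p = print       # the assignment makes p a local: its uses compile to LOAD_FAST
                      p(s)

                  dis.dis(f)          # shows LOAD_GLOBAL for print
                  dis.dis(g)          # shows one LOAD_GLOBAL for print, then LOAD_FAST for p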

                  The transformation from LOAD_GLOBAL to LOAD_FAST is not uncommon, and you see it in the standard library: e.g., https://github.com/python/cpython/blob/main/Lib/json/encoder.py#L259

                  I don’t know what current measurements of the performance improvement look like, after LOAD_GLOBAL optimisations in Python 3.9, which reported 40% improvement: https://bugs.python.org/issue26219 (It may be the case that the global-to-local transformation is no longer considered a meaningful speed-up.)

                  Note that the transformation from global-to-local scope, while likely innocuous, is a semantic change. If builtins.print or the global print is modified in some other execution unit (e.g., another thread,) the function will not reflect this (as global lookups can be considered late-bound, which is often desirable.)
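
                  A contrived illustration of that semantic change, with the rebinding standing in for “some other execution unit”:

                  import builtins

                  def yes_once(s):
                      p = print                           # the global/builtin is looked up here, once
                      original = builtins.print
                      builtins.print = lambda *a: None    # simulate another unit rebinding print...
                      try:
                          p(s)                            # ...the early-bound local still calls the original
                          print(s)                        # a late-bound lookup now finds the no-op lambda
                      finally:
                          builtins.print = original

                  yes_once("y")                           # prints "y" exactly once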

                  1. 8

                    I think this small point speaks more broadly to the dissatisfaction many of us have with the “software is slow” mindset. The criticisms seem very shallow.

                    Complaining about slow software or slow languages is an easy criticism to make from the outside, especially considering that the biggest risk many projects face is failure to complete or failure to capture critical requirements.

                    Given a known, fixed problem with decades of computer science research behind it, it’s much easier to focus on performance—whether micro-optimisations or architectural and algorithmic improvements. Given three separate, completed implementations of the same problem, it’s easy to pick out which is the fastest and also happens to have satisfied just the right business requirements to succeed with users.

                    I think the commenters who suggest that performance and performance-regression testing should be integrated into the software development practice from the beginning are on the right track. (Right now, I think the industry is still struggling with getting basic correctness testing and documentation integrated into software development practice.)

                    But the example above shows something important. Making everything static or precluding a number of dynamic semantics would definitely give languages like Python a better chance at being faster. But these semantics are—ultimately—useful, and it may be difficult to predict exactly when and where they are critical to satisfying requirements.

                    It may well be the case that some languages and systems err too heavily on the side of allowing functionality that reduces the aforementioned risks. (It’s definitely the case that Python is more dynamic in design than many users make use of in practice!)

                    1. 2

                      Interesting! I was unaware that the parser (!?) did that optimization. I suppose it isn’t difficult to craft code that forces LOAD_NAME every time (say, by reading a string from stdin and passing it to exec) but I find it totally plausible that that rarely happens in non-pathological code.

                      Hm. For a lark, I decided to try it:

                      >>> def yes(s):
                      ...  exec("p = print")
                      ...  p(s)
                      ... 
                      >>> dis.dis(yes)
                        2           0 LOAD_GLOBAL              0 (exec)
                                    2 LOAD_CONST               1 ('p = print')
                                    4 CALL_FUNCTION            1
                                    6 POP_TOP
                      
                        3           8 LOAD_GLOBAL              1 (p)
                                   10 LOAD_FAST                0 (s)
                                   12 CALL_FUNCTION            1
                                   14 POP_TOP
                                   16 LOAD_CONST               0 (None)
                                   18 RETURN_VALUE
                      >>> yes("y")
                      Traceback (most recent call last):
                        File "<stdin>", line 1, in <module>
                        File "<stdin>", line 3, in yes
                      NameError: name 'p' is not defined
                      
                2. 5

                  and for those languages, the trade off is developer ease

                  I heard Jonathan Blow make this point on a podcast and it stuck with me:

                  We’re trading off performance for developer ease, but is it really that much easier? It’s not like “well, we’re programming in a visual language and just snapping bits together in a GUI, and it’s slow, but it’s so easy we can make stuff really quickly.” Like Python is easier than Rust, but is it that much easier? In both cases, it’s a text based OO language. One just lets you ignore types and memory lifetimes. But Python is still pretty complicated.

                  Blow is probably a little overblown (ha), but I do think we need to ask ourselves how much convenience we’re really buying by slowing down our software by factors of 100x or more. Maybe we should be more demanding about our slowdowns and expect something that trades more back for them.

                  1. 2

                    Like Python is easier than Rust, but is it that much easier?

                    I don’t want to start a fight about types but, speaking for myself, Python became much more attractive when they added type annotations, for this reason. Modern Python feels quite productive, to me, so the trade-off is more tolerable.

                    1. 1

                      It depends upon the task. Are you manipulating or parsing text? Sure, C will be faster in execution, but in development?

                      At work, I was told to look into SIP, and I started writing a prototype (or proof-of-concept if you will) in Lua (using LPeg to parse SIP messages). That “proof-of-concept” went into production (and is still in production six years later) because it was “fast enough” for use, and it’s been easy to modify over the years. And if we can ever switch to using x86 on the servers [1], we could easily use LuaJIT.

                      [1] For reasons, we have to use SPARC in production, and LuaJIT does not support that architecture.

                3. 7

                  The trick about cache optimizations is that they can be a case where, sure, individually you’re shaving nanoseconds off, but sometimes those nanoseconds are alarmingly common in the program flow and the optimization is worth doing before any higher-level fixes.

                  To wit: I worked on a CAD system implemented in Java, and the “small optimization” of switching to a pooled-allocation strategy for vectors instead of relying on the normal GC meant the difference between an unusable application and a fluidly interactive one, simply because the operation I fixed was so core to everything that was being done.

                  Optimizing cache hits for something like mouse move math can totally be worth it as a first step, if you know your workload and what code is in the “hot” path (see also sibling comments talking about profiling).

                  1. 6

                    They’re using slow languages (or language implementations, for the pedantics out there) like cpython where even the simplest operations require a dictionary lookup

                    I take issue with statements like this, because the majority of code in most programs is not being executed in a tight loop on large enough data to matter. The overall speed of a program has more to do with how it was architected than with how well the language it’s written in scores on microbenchmarks.

                    Besides, Python’s performance cost isn’t just an oversight. It’s a tradeoff that provides benefits elsewhere in flexibility and extensibility. Problems like serialization are trivial because of meta-programming and reflection. Complex string manipulation code is simple because the GC tracks references for you and manages the cleanup. Building many types of tools is simpler because you can easily hook into stuff at runtime. Fixing an exception in a Python script is a far more pleasant experience than fixing a segfault in a C program that hasn’t been built with DWARF symbols.
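
                    The serialization point, sketched (this leans on vars(), so it doesn’t cover __slots__, properties, or nested objects; the class is just an example):

                    import json

                    class Job:
                        def __init__(self, name, retries):
                            self.name = name
                            self.retries = retries

                    def to_json(obj):
                        # Reflection does the work: no per-class serializer has to be written.
                        return json.dumps(vars(obj))

                    print(to_json(Job("reindex", 3)))   # {"name": "reindex", "retries": 3}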

                    Granted, modern compiled languages like Rust/Go/Zig are much better at things like providing nice error messages and helpful backtraces, but you’re paying a small cost for keeping a backtrace around in the first place. Should that be thrown out in favor of more speed? Depending on the context, yes! But a lot of code is just glue code that benefits more from useful error reporting than faster runtime.

                    For me, the choice in language usually comes down to how quickly I can get a working program with limited bugs built. For many things (up to and including interactive GUIs) this ends up being Python, largely because of the incredible library support, but I might choose Rust instead if I was concerned about multithreading correctness, or Go if I wanted strong green-thread support (Python’s async is kinda meh). If I happen to pick a “fast” language, that’s a nice bonus, but it’s rarely a significant factor in that decision making process. I can just call out to a fast language for the slow parts.

                    That’s not to say I wouldn’t have mechanical sympathy and try to keep data structures flat and simple from the get go, but no matter which language I pick, I’d still expect to go back with a profiler and do some performance tuning later once I have a better sense of a real-world workload.

                    1. 4

                      To add to what you say: Until you’ve exhausted the space of algorithmic improvements, they’re going to trump any microoptimisation that you try. Storing your data in a contiguous array may be more efficient (for search, anyway - wait until you need to insert something in the middle), but no matter how fast you make your linear scan over a million entries, if you can reframe your algorithm so that you only need to look at five of them to answer your query then a fairly simple data structure built out of Python dictionaries will outperform your hand-optimised OpenCL code scanning the entire array.
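
                      A toy version of that point in Python (sizes picked arbitrarily):

                      import random, timeit

                      records = [(i, f"value-{i}") for i in range(1_000_000)]
                      index = dict(records)                  # one-time cost to build the better structure
                      wanted = random.randrange(1_000_000)

                      def scan():                            # even a tight linear pass touches a million entries
                          return next(v for k, v in records if k == wanted)

                      def lookup():                          # the dict answers the same query with about one probe
                          return index[wanted]

                      print(timeit.timeit(scan, number=10))
                      print(timeit.timeit(lookup, number=10))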

                      The kind of microoptimisation that the article’s talking about makes sense once you’ve exhausted algorithmic improvements, need to squeeze the last bit of performance out of the system, and are certain that the requirements aren’t going to change for a while. The last bit is really important because it doesn’t matter how fast your program runs if it doesn’t solve the problem that the user actually has. grep, which the article uses as an example, is a great demonstration here. Implementations of grep have been carefully optimised but they suffer from the fact that requirements changed over time. Grep used to just search ASCII text files for strings. Then it needed to do regular expression matching. Then it needed to support unicode and do unicode canonicalisation. The bottlenecks when doing a unicode regex match over a UTF-8 file are completely different to the ones doing fixed-string matching over an ASCII text file. If you’d carefully optimised a grep implementation for fixed-string matching on ASCII, you’d really struggle to make it fast doing unicode regex matches over arbitrary unicode encodings.

                      1. 1

                        The kind of microoptimisation that the article’s talking about makes sense once you’ve exhausted algorithmic improvements, need to squeeze the last bit of performance out of the system, and are certain that the requirements aren’t going to change for a while.

                        To be fair, I think the article also speaks of the kind of algorithmic improvements that you mention.

                      2. 3

                        Maybe it’s no coincidence that Django and Rails both seem to aim at 100 concurrent requests, though. Both use a lot of language magic (runtime reflection/metaprogramming/metaclasses), afaik. You start with a slow dynamic language, and pile up more work to do at runtime (in this same slow language). In this sense, I’d argue that the design is slow in many different ways, including architecturally.

                        Complex string manipulation code is simple because the GC tracks references for you

                        No modern language has a problem with that (deliberately ignoring C). Refcounted/GC’d strings are table stakes.

                        I personally dislike Go’s design a lot, but it’s clearly designed in a way that performance will be much better than python with enough dynamic features to get you reflection-based deserialization.

                      3. 1

                        Every time I’ve had the urge to fire up a profiler, the problem was either an inefficient algorithm (worse big-O) or repeated database fetches (inefficient cache usage). Never have I found that performance was bad because of slow abstractions. Of course, this might be because the software I work with (Python web services) has a lot of experience behind crafting good, fast abstractions. You can find people new to Python who don’t use them, which results in bad performance, but that is quickly learned away. What is important, if you want to write performant Python code, is to use as little “pure Python” as possible. Python is a great glue language, and it works best when it is used that way.
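
                        One everyday example of “as little pure Python as possible” (the absolute numbers will vary, but the shape of the result shouldn’t):

                        import timeit

                        data = list(range(1_000_000))

                        def pure_python_sum(xs):
                            total = 0
                            for x in xs:            # every iteration runs bytecode in the interpreter loop
                                total += x
                            return total

                        print(timeit.timeit(lambda: pure_python_sum(data), number=10))
                        print(timeit.timeit(lambda: sum(data), number=10))    # the C-implemented builtin does the loop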

                        1. 1

                          Never have I found that performance was bad because of slow abstractions.

                          I have. There was the time when fgets() was the culprit, and another time when checking the limit of a string of hex digits was the culprit. The most surprising result I’ve had from profiling is a poorly written or poorly compiled library.

                          Looking back on my experiences, I would have to say I’ve been surprised by a profile result about half the time.

                        2. 1

                          As a pedant out here, I wanted to say that I appreciate you :)

                        1. 1

                          I feel like I need corroboration to know if this is really good advice or not. 😄

                          1. 4

                            I include an implementation of strlcpy if that’s missing on the target; it’s not a complex function to implement if you cannot include a third-party implementation for some reason.

                            If you can replace strcpy with memcpy, then it’s true you should have been using memcpy in the first place. However you cannot always replace strcpy with memcpy with the same efficiency, and strlcpy has the correct semantics.

                            I do agree the *_s variants are pointless.

                            1. 2

                              Totally agree! As I read this post, I remembered this other post on the same topic.

                              strcpy, like gets, is fundamentally unsafe and there’s no way to use it safely unless the source buffer is known at compile-time. I know multiple people who give out the advice to use strncpy instead of strcpy, but I’m not a believer. Using strncpy requires that you know the length of the destination buffer, and if you know that, then you could be using memcpy instead.

                              If you want a safer strcpy, here you go:

                              #include <string.h>   /* strlen, memcpy */

                              char * strcpy_improved(char * dest, const char * src, size_t dest_size) {
                                  size_t length = strlen(src) + 1;   /* include the terminating NUL */
                                  if (length > dest_size) {
                                      return NULL;                   /* would not fit: fail rather than truncate */
                                  }
                                  memcpy(dest, src, length);
                                  return dest;
                              }
                              

                              This is basically how strcpy is implemented in glibc, with the length check added. This is what unaware people believe strncpy does.

                              This still isn’t totally foolproof – if src is not a valid string, or either pointer is NULL, that’s undefined behaviour. Also, technically all identifiers should be unique in their first 6 characters, and all identifiers beginning with str are reserved anyway, but that’s C programming for you.

                              I honestly don’t know what the point of strncpy is. I understand the urge to strcpy to copy a short string into a large buffer; it only copies as many bytes as necessary. But strncpy does not do this – it copies the string into the buffer, and then it fills the buffer with null bytes until it has written as many bytes as you told it to. Basically, it’s worse than memcpy in every way unless this particular weird behaviour is what you really want. To call it “niche” is not only fair, it’s kind.

                              2 years ago, I submitted to a C library a pull request which changed a strncpy to a memcpy when gcc started issuing warnings about bad uses of strncpy. I kinda wish gcc would issue a warning for any use of the str*cpy functions, possibly with a link to some helpful advice on what to do instead.

                              1. 3

                                Using strncpy requires that you know the length of the destination buffer, and if you know that, then you could be using memcpy instead

                                The length of the destination buffer is the maximum number of characters you can copy. The length of the source string is the maximum number that you want to copy. In any cases where the former is smaller than the latter, you want to detect an error.

                                The strlcpy function is good for this case. It doesn’t require you to scan the source string twice (once to find the null terminator, once to do the copy) and it lets you specify the maximum size. It always returns a null-terminated buffer (unlike strncpy, which should never be used because if the destination is not long enough then it doesn’t null terminate and so is spectacularly dangerous).

                                There are three cases:

                                • You know the length of the source and the size of the destination. Use memcpy.
                                • You know the size of the destination. Use strlcpy, check for error (or don’t if you don’t care about truncation - the result is deterministic and if you’ve asked for a string up to a certain size then strlcpy may enforce this for you).
                                • You don’t want to think about the size of the destination. Use strdup and let it allocate a buffer that’s big enough for your string.

                                In 99% of the cases I’ve seen, strdup is the right thing to do. Don’t worry about the string length; just let libc handle allocating a buffer for it. For most of the rest, strlcpy is the right solution. If memcpy looks like the right thing, you’re probably dealing with some abstraction over C strings, rather than raw C strings. If you’re willing to do that, use C++’s std::string, let it worry about all of this for you, and spend your time on your application logic and not on tedious bits of C memory management.

                                1. 1

                                  strlcpy is better, and if truncation to the length of your dest buffer is what you want, then it’s the best solution. More commonly, I want to reallocate a larger buffer and try again, but you’re correct that strdup is a much simpler way to get that result most of the time.

                                  I decided to look up the Linux implementation of strlcpy, and it works the same way as my function above: a strlen and then a memcpy. So it does still traverse the array twice, but I don’t see why that’s a problem.

                                  1. 2

                                    I decided to look up the Linux implementation of strlcpy, and it works the same way as my function above: a strlen and then a memcpy. So it does still traverse the array twice, but I don’t see why that’s a problem.

                                    I found that a bit surprising, but that’s the in-kernel version so who knows what the constraints were. The FreeBSD version (which was taken from OpenBSD, which is where the function originated) doesn’t. The problem with traversing the string twice is threefold:

                                    • If the string is large, the first traversal will evict parts of the beginning from L1 cache, so you’ll hit L1 misses on both traversals.
                                    • You are far more likely to want to use the destination soon than the source, but the fact that you’ve read it twice in quick succession will hint the caches that you’re likely to use the source again and they’ll prioritise evicting things that you don’t want.
                                    • [Far less important on modern CPUs]: You’re running a load more instructions because you have all of the loop logic twice.

                                    The disadvantage of the single-pass version is that it’s far less amenable to vectorisation than the strlen + memcpy version. Without running benchmarks, I don’t know which is going to be slower. The cache effects won’t show up in microbenchmarks, so I’d need to find a program that uses strlcpy on a hot path for it to matter.

                                    1. 1

                                      You raise some compelling points! And compiler optimizations will throw another wrench in there. Without doing rigorous benchmarking, this is all speculation, but it’s interesting speculation.

                                2. 2

                                  I honestly don’t know what the point of strncpy is. I understand the urge to strcpy to copy a short string into a large buffer; it only copies as many bytes as necessary. But strncpy does not do this – it copies the string into the buffer, and then it fills the buffer with null bytes until it has written as many bytes as you told it to. Basically, it’s worse than memcpy in every way unless this particular weird behaviour is what you really want. To call it “niche” is not only fair, it’s kind.

                                  strncpy was intended for fixed-length character fields such as utmp; it wasn’t designed for null-terminated strings. It’s error prone so I replace strnc(at|py) with strlc(at|py) or memmove.

                                  1. 2

                                    strcpy, like gets, is fundamentally unsafe and there’s no way to use it safely unless the source buffer is known at compile-time.

                                    Huh? The danger of gets is completely different from that of strcpy (and the former is certainly worse) – gets does I/O, taking in arbitrary, almost-certainly unknown input data; strcpy operates entirely on data already within your program’s address space and (hopefully) already known to be a valid, NUL-terminated string of a known length. Yes, it is very possible (easy, even) to screw that up and end up with arbitrary badness, but it’s a lot easier to get right than ensuring that whatever bytes gets pulls in are going to contain a linefeed within the expected number of bytes (the only way I can think of offhand for using gets safely would involve dup-ing a pipe or socketpair or something you created yourself to your own stdin and writing known data into it).

                                    (This is not to say that strcpy is great, nor to negate the point of the article that it can and quite arguably should be replaced by memcpy in most cases. But it’s not as grossly broken as gets.)

                                    1. 1

                                      Okay, I might have exaggerated there. :^)

                                      strcpy is suitable for some situations – copying static strings, or strings that are otherwise of a known length. In the latter case, memcpy is better. In the former case, I actually think strcpy is fine, even though this article argues against it. I would expect a modern compiler to optimize those copies to memcpy anyway.

                                      gets is basically totally unusable in all situations. Somebody doing something bizarre like you mentioned should probably rethink their approach…

                                1. 4

                                  I don’t see why to disallow // on NonEmpty. Sure it will never return the default case, but that’s fine.

                                  1. 2

                                    There might be no mathematical reason to exclude it, but using (// dfault) instead of head for a NonEmpty would make me stop and question whether I or the author misunderstood what it would do.

                                    1. 4

                                      If they know they have a NonEmpty, sure, but since it’s a polymorphic operator I would expect it to sometimes be used in a polymorphic context.

                                  1. 1

                                      I can get behind cases where spacing and precedence match. I’d love it if automatic code formatters and linters followed that style, so that if you write w+x / y+z, you’ll see the error when it shows up as w + x/y + z.

                                    1. 1

                                      I believe gofmt does this!

                                    1. 3

                                      Additionally one prefix operator for negation would be nice. Since - is already in use, I chose ~.

                                      It seems like it might be possible to treat - as either a binary or a unary operator depending on the context: if it has whitespace before it but not after it—which is illegal for other operators—it’s the unary operator, and otherwise it’s the binary operator. That would make it possible to write things like 3 + -2. (One downside is that the mandatory whitespace on the left would prevent you from writing things like 20+-10 / 2, which would require parentheses if you were going to insist on negating the 10.)
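
                                        A rough sketch of how that rule could look in a tokenizer (hypothetical code, not from the article’s calculator):

                                        import re

                                        # '-' preceded by whitespace (or the start of input) but not followed by
                                        # whitespace is treated as unary negation; any other '-' is subtraction.
                                        TOKEN = re.compile(r"\s+|\d+(?:\.\d+)?|[-+*/()]")

                                        def tokenize(src):
                                            out, prev_ws = [], True            # start of input counts as "whitespace before"
                                            for m in TOKEN.finditer(src):
                                                tok = m.group()
                                                if tok.isspace():
                                                    prev_ws = True
                                                    continue
                                                followed_by_ws = m.end() >= len(src) or src[m.end()].isspace()
                                                if tok == "-" and prev_ws and not followed_by_ws:
                                                    out.append("NEG")          # unary negation
                                                else:
                                                    out.append(tok)
                                                prev_ws = False
                                            return out

                                        print(tokenize("3 + -2"))   # ['3', '+', 'NEG', '2']
                                        print(tokenize("3 - 2"))    # ['3', '-', '2']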

                                      1. 2

                                          Haskell does something similar: if - has no complete expression to its left, it’s a prefix negation operator; otherwise it’s subtraction. This rule is an exception to how the rest of the language works, but it’s considered an acceptable compromise.

                                        Another thing I considered was making + and - part of the number literal, so that -1 can be parsed as a single token in addition to - being an infix operator. This makes parsing numbers a bit more complicated, but 1 -2 is still invalid, so it seems to work. The practical effect of this is similar to your idea, and I believe Fortran does this.

                                        I didn’t agonize over the decision though. This calculator is mostly a proof-of-concept for an idea that I wanted to try out, and the simplest thing was to pick a different symbol for unary negate.

                                        It’s worth noting too that the concept of -1 can be understood as beginning with an implicit 0, as in 0-1. This makes a negation operator pretty unnecessary anyway!

                                      1. 2

                                        A few thoughts:

                                        • Syscalls should be atomic and “just work”; the caller should not need to check that all bytes were actually written, and if not, try again. There might be room for super-mega-low-level syscalls that behave like the current UNIX ones, but they should not be the defaults.
                                        • Filenames should be case-insensitive and allow only alphanumeric characters, dots, and dashes. They certainly do not need #, &, ;, spaces, and (maybe worst of all) newlines.
                                        • Choose between environment and arguments; they are too redundant to need both. Either way, from the shell they could use something like the current argument syntax, coming after the program name.
                                        1. 4

                                          File names need spaces; they’re intended for humans, not regexes.

                                          1. 3

                                            Syscalls should be atomic and “just work”; the caller should not need to check that all bytes were actually written, and if not, try again. There might be room for super-mega-low-level syscalls that behave like the current UNIX ones, but they should not be the defaults.

                                            ITS did it; it’s the famous example from the “worse is better” essay.

                                            1. 2
                                              • Syscalls: How about just-in-time custom system calls? The whole concept is mindblowing, and it ended up running SunOS programs faster than SunOS on the same hardware. Too bad it was never released as source code.
                                              • Filename: Back in college, some friends and I were in the process of designing our dream operating system, and the filesystem I came up with not only would allow filenames to be arbitrary text, but anything—like an image, a sound, even itself (although that would probably be A Bad Thing). These days, I would probably just disallow control characters in a file name, and if we’re doing a whole new operating system, that assumption can be built in and not hacked about.
                                              • Environment vs arguments: I think environment variables have a place. They’re useful when you need to specify some value, but don’t want to do it via arguments, or it would be inconvenient each time. If a program wants to display some text on the console, it can use $PAGER to get the program to use. Or if you want to edit something, it can launch $EDITOR. And it works when your preferred shell might not support aliases or functions.
                                              1. 1

                                                Data belongs to the user. Therefore, the user should be allowed to name their data whatever they like. The mundane nonsense of a \0 terminated string bubbling up and interfering with a user’s ownership of their own information is an insulting failure of abstraction.

                                              1. 7

                                                Without any changes, I’m scoring quite okay : https://www.websitecarbon.com/website/raymii-org/

                                                • Hurrah! This web page is cleaner than 94% of web pages tested
                                                • Only 0.06g of CO2 is produced every time someone visits this web page.
                                                • Over a year, with 10,000 monthly page views, this web page produces 7.39kg of CO2 equivalent. The same weight as 0.05 sumo wrestlers and as much CO2 as boiling water for 1,001 cups of tea.
                                                • This web page emits the amount of carbon that 1 tree absorbs in a year.
                                                • 16kWh of energy: that’s enough electricity to drive an electric car 100km.
                                                1. 6

                                                  I just redesigned my site to use less resources, including images. Beat you by 4 percentage points :-)

                                                  1. 4

                                                    I redesigned mine quite a while back, and while I wish I could say I beat both of you, the best I can do is a tie between @johnaj and me. https://www.websitecarbon.com/website/jeremy-marzhillstudios-com/

                                                    1. 2

                                                      I also redesigned mine awhile back in the name of speed and simplicity. I guess that’s good for the environment because apparently I beat everyone ever: https://www.websitecarbon.com/website/cosmo-red/

                                                      1. 4

                                                        Haha that’s cool, I wonder what type of car that is, I want one:

                                                        • 0kWh of energy: that’s enough electricity to drive an electric car 2km.
                                                        1. 3

                                                          Maybe it’s got a sail? It’s a sailcar!

                                                        2. 3

                                                          My results tell me my car would move 1km further, but I’m using sustainable energy, so who wins that?

                                                          1. 2

                                                            Heh, I vote for you. Where do you host your website?

                                                            1. 3

                                                              Strato, but mainly because I get 200GB for 5 Euro a month.

                                                    2. 4

                                                      I have a small personal, statically-generated blog which managed to score 99% by emitting 0.00g.

                                                      https://www.websitecarbon.com/website/danso.ca

                                                      Which makes me wonder about the methodology, obviously.

                                                      A weakness of this method is that it does not account for the cost of building the website from the markdown and Haskell source files. Compiling a program is not free, nor is running it. But maybe that cost is negligible compared to serving the website after it’s made?

                                                    1. 4

                                                      I never had much respect for alarmism. We’ve had it, in regard to climate, for decades, and I only too well remember Al Gore warning us in 2008 of an ice-free Arctic by 2013, just to give one of many examples. Greta Thunberg is the next up-and-coming generation of climate alarmists and given we do in fact have a global warming, the human factor is yet to be assessed (consider we are easing out of a small ice-age that just, out of chance, had its lowest point in the mid 1800’s when humans started measuring temperatures systematically).

                                                      However, I still wholeheartedly support renewable energy and resource savings, because we live on a finite planet with finite resources. We should do anything to save resources and energy, but not fall in panic over it or embrace ridiculous measures that are not sustainable in the long term. Maybe it’s needed to push the majority of people, but as a rational person I feel insulted by this.

                                                      Measuring everything in “CO2 emissions” is valid, but for a different reason, in my opinion, than to mitigate the effects on the atmosphere: The carbon we emit comes from fossil fuels, which are one finite resource I think should not be “wasted”. Given “CO2 emissions” directly correlate with carbon-based fuel-consumption, it may be a bit mislabeled, but generally valid.

                                                      In terms of web development: Stop bloating your websites with too much CSS, JavaScript and excessive markup and reduce the transferred weight, but don’t panic over it or say that a website is “killing the planet”. This is an industry-wide problem and needs to be solved at scale. When this doesn’t change, your website won’t make much of a difference compared to the few major players.

                                                      1. 16

                                                        the human factor is yet to be assessed

                                                          I thought that in 2020 it is common knowledge that humans are without a doubt responsible for the global climate crisis. And temperatures are also measured by means other than direct ones. That includes geological ones.

                                                        1. 3

                                                          Indeed only a fool would say that we humans, who affect the planet in so many profound ways, have no influence on the climate. The question is: How much? An everlasting ethos, in my opinion, is resource-saving, but it needs to be balanced so we don’t throw away what we’ve achieved as a species.

                                                          1. 11

                                                              What is missing in this analysis by Carbon Brief? Most of the current natural phenomena actually contribute to global cooling and work in our favour. Humanity’s carbon footprint managed to beat even that.

                                                            1. 4

                                                              Climate is extremely complex, and one can’t really predict most things. I may bring out a strawman here, but how can we be so certain about centennial climate predictions (2°C-goal until 2100, for instance) when our sophisticated climate models can’t even accurately predict next week’s weather?

                                                              But as I said in my first comment, my biggest problem is the alarmism and I’m not even denying the human influence on world climate. So I’m actually on your side and demanding the same things, only with a different viewpoint.

                                                              1. 9

                                                                how can we be so certain about centennial climate predictions (2°C-goal until 2100, for instance) when our sophisticated climate models can’t even accurately predict next week’s weather?

                                                                Because weather and climate are not the same. We can’t model turbulent flow in fluid systems, but we can predict when they change from laminar to turbulent on a piece of paper. We can’t model how chemical reactions actually work at an atomic level, but whether or not they should take place is another simple calculation. We can’t model daily changes in the stock market, but long-term finance trends are at least vaguely approachable.

                                                                1. 16

                                                                  I’m not even denying the human influence on world climate.

                                                                  you said, “the human factor is yet to be assessed,” when it has been assessed again and again by many well-funded organizations. that’s denial, bucko

                                                                  1. 1

                                                                    No, it’s not denial and science is not a religion. Assessment means studying an effect, and I still do not think that the foregone conclusion of 100% human influence is substantial. It’s less than that, but not 0%, which would make me a denier.

                                                                    1. 1

                                                                      Assessment means studying an effect

                                                                      so by “the human factor is yet to be assessed,” did you mean that the effect has not been studied? are you not denying that the human factor has been studied?

                                                                      typically the category of “denial” doesn’t mean you think a claim has a 0% chance of being correct; most people are not 100% certain of anything and the concept of denial is broader than that in common speech. organizations of scientists studying climate change are very confident that it is largely human caused; if your confidence in that claim is somewhere nominally above 0%, it would still mean you think it is most likely untrue, and you would be denying it.

                                                                      1. 1

                                                                        An effect can be heavily studied but still inconclusively. From what I’ve seen and read, the human factor is obviously there and not only marginally above 0%, most probably way beyond that, but I wouldn’t zero out other factors either. If that means denial to you, then we obviously have different definitions of the word.

                                                                        1. 1

                                                                          saying the human factor hasn’t been assessed casts doubt on it. now you are saying it is “obviously there” which is quite different.

                                                                    1. 5

                                                                      The only thing I can do, as an individual, is to adapt, prepare and overcome. In my initial comment, I already mentioned an example for wrong alarmist predictions, and they even date back to the 60’s! Moving the fence pole and saying the arctic ice will have disappeared in the next n years won’t help bring me on board. Al Gore back then cited “irrefutable” science and I remember being presented his movie in school, but his predictions all proved to be wrong.

                                                                Still, we are on the same side, kel: Our footprints are unsustainably large, and I as an individual strive to reduce mine whenever I can. The truth is, though, that even Germany, which only contributes 2% of global carbon emissions, doesn’t play much of a role here, and the big players need systemic change.

                                                                      It’s funny, actually, given this pretty much rings with the individual argument of slimming down your website: When Google, Youtube, Medium, etc. don’t move along, it doesn’t make much of a difference.

                                                                      1. 11

                                                                        The only thing I can do, as an individual, is to adapt, prepare and overcome.

                                                                        It is both frustrating and liberating how little influence an individual has. However, in the moment you decided to post a number of comments on this site, you contribute to the public opinion forming process. I think that this gives you much more influence than immediately obvious. Discussions on sites like lobste.rs are read by many people, and every reader is potentially influenced by the opinions you or anyone else express here. And with great power comes great responsibility ;-) With that in mind, I am glad that other commenters challenged your initial comments about climate “alarmism” and prompted you to clarify them.

                                                                        1. 7

                                                                          germany is the most powerful state in the european pole of the tripolar world economic system. it has much to say about how other countries it is economically tied to are allowed and enabled to industrialize and maintain their standard of living. germans own plenty of carbon-emitting capital in countries that don’t have the same level of regulation, and they need to be made accountable for the effect they have on the world.

                                                                  2. 3

                                                                    so we don’t throw away what we’ve achieved as a species

                                                                    Do you truly think silly performative ecological politics are going to “throw away” your first world niceties or are you talking about how ecological collapse will likely trigger progressively even more massive failures in supply chains as we inevitably blow through 1.5C

                                                                    1. 3

                                                                      There’s more to the world than economics, e.g. achievements in human rights and freedoms. But I don’t want to go too off-topic here (we are on Lobste.rs, after all).

                                                                      1. 5

                                                                        achievements in human rights and freedoms

                                                                        None of this will matter when people living in most affected areas – that are suffering from climate crisis already (thanks to droughts, lands becoming effectively uninhabitable etc.), not to mention what will happen in the following years – will come to our first world demanding a place to live. And we will point our guns at them. As one of the commenters said: “Desperate people will do desperate things”. And all of this will happen over years, decades. Painstakingly.

                                                                        Unfortunately some people will write it off as plain alarmism while dismissing well proven scientific position. And the position is: I want to have good news but it looks really fucking bad. I’d love to ignore all those facts just to live a happier life but I find it hard. It saddens me deeply that behind that facade of freethinking, you pretty much made up your mind for good. I do not mean to insult you. It’s just the way you speak in all your comments that makes me think that way. I hope I am wrong. Eh, shame.

                                                                        One could consider famous Newsroom piece about climate change as an alarmism but unfortunately it seems to be very on point.

                                                                2. 9

                                                                  The planet will be fine. It’s the people who are fucked.

                                                                  George Carlin

                                                                  I almost want to agree with you, except that underestimating the impact of climate change has already cost society massively and climbing.

                                                                  Firstly, if you believe that our current rate of temperature change is historically typical, there’s an xkcd comic for you.

                                                                  I will go as far as to say that those who consider climate change an existential threat are perhaps looking at it the wrong way. But I’m not about to start undermining their cause in this way, because people tend toward apathy about long-term threats and the cost of underestimating climate change is far greater than the risk of overestimating it. Climate change has already begun to have direct costs, both monetary and humanitarian.

                                                                  As an example of monetary cost, in Gore’s documentary he presents a demonstration of rising sea levels around Manhattan Island and makes a point that the September 11 memorial site will be below sea level.

                                                                  This might be true, but below sea level does not mean underwater. The flooding projection makes the assumption that humans are either going to do nothing about it and drown or are going to pack up New York and leave. I think neither scenario is likely.

                                                                  What will happen is that the rising sea level will be mitigated. The city will build huge-scale water-control mechanisms (such as levees). The cost of living on the island will rise sharply. Once in a while, this system will fail, temporarily flooding the homes of millions of people. They will bail it out and go on living.

                                                                  Not so bad, right? The catch is that the projected cost of this, in purely financial terms, is predicted to vastly outweigh the cost of reducing pollution now. And we don’t need to hit discrete targets to see a benefit – every gram of CO2 that we don’t emit today will reduce the amount of water in a nearly-certain future flooding event.

                                                                  This is beside the humanitarian cost.

                                                                  Climate change does not come without opportunities. Likely, the farming season in Canada and Russia will lengthen, leading to more food produced in those countries. Cool, but meanwhile in other places, the drought season will lengthen. People won’t be magically transported from one place to another; there are logistical, political, and sociological obstacles. People stuck in those regions will become increasingly desperate, and desperate people do desperate people things. With today’s weapons technology, that’s the kind of situation that really could lead to humanity’s extinction.

                                                                  So please be careful with the point-of-view that you present. You might not be wrong, but contributing to a culture that underestimates the oncoming danger is exactly what got us here in the first place.

                                                                  1. 5

                                                                    I’m not denying the danger or playing it down, and we can see current effects of global warming. We humans must adapt to it, or else we will perish. It would not be far-fetched to assume that this global warming might even lead to more famines that can kill millions of people.

                                                                    The problem I see is the focus on CO2, but resource usage has many forms. Many people find pleasure in buying EVs, while charging them with coal power and not really reducing their footprint a lot (new smartphone every year, lots of technological turnover, lots of flights, etc.). I’m sure half of the people accusing me of “playing it down” have a much larger “CO2 footprint” (I’d rather call it resource footprint) than I do.

                                                                  2. 9

                                                                    The climate has not changed like this before in human timescales. https://xkcd.com/1732/

                                                                    Today, denying human-induced climate change requires more than disagreeing with the scientific consensus on future predictions, it requires denying current events. The climate crisis is already here, and it already has a death toll.

                                                                    The good news is that you don’t need to update your understanding and stop swallowing narratives produced by fossil fuel corporations, although we could certainly use all the help we can get. You just need to get out of the way of people like Greta who are taking meaningful action to avert the climate crisis on a systemic level. If you live in the US, Sunrise Movement are extremely effective young organizers who deserve your respect. If all you have to offer is sniping from the sidelines, maybe you should rethink your contributions. Have you actually done anything to make the world a better place, or do you just complain about people who do the work?

                                                                    1. 5

                                                                      Given the many factors influencing climate itself and the models built to predict it, studies greatly diverge from each other. Big fossil fuel corporations cite the least-alarmist ones, and environmental extremists cite the most-alarmist ones. As always, the truth lies in the middle.

                                                                      It’s a great shame that people die from this; it’s a downstream effect of the fact that the entire industrial age (including urbanization and expansion) was built on the assumption of the small ice age that lasted until roughly the 1850s–1900s. The increasingly warm global temperature takes its toll.

                                                                      My favourite example is the Norway spruce, the main tree for commercial wood production in Germany. It originally comes from the mountains, but during industrialization it was increasingly planted in the lowlands, which worked because the climate was still relatively cool. The few degrees of warming have left the trees massively weakened, and our German forests, which consist substantially of spruce monocultures, are now riddled with diseases and pests because of it.

                                                                      Over the years I’ve read many alarmist reports by big scientific players that proved to be completely false, which is okay: scientists can err, especially with something as multivariate as climate. My view is that we should not just repeat “CO2 emissions” as a mantra, but also adapt to the changing climate (diversify forests, etc.) instead of turning this into yet another speculator’s paradise of CO2 certificates, which do nothing but shift wealth around.

                                                                      The real damning truth is the following: I live in Germany, and if one flipped a switch that would wipe Germany and all its inhabitants from the face of the earth, the global CO2 emissions would only drop by 2%. As always, it’s the big players (USA, China, etc.) that need to change systemically.

                                                                      Have you actually done anything to make the world a better place, or do you just complain about people who do the work?

                                                                      Not to sound too harsh, but I basically don’t matter, just as the individual Chinese or US citizen doesn’t matter. Electric vehicles won’t make a difference, because the CO2 emissions are simply offshored to developing countries where the battery components are mined and processed. Charging an EV in Germany means coal power, no matter how much “eco” electricity you buy, since that’s just a reshuffling on the energy market.

                                                                      I do my part by not buying a new phone or computer every year, by driving a used car (a diesel), which is still more environmentally friendly than buying a new car that has to be produced in the first place, by buying regional, and so on. As an individual, these things make much more of a difference than buying a Tesla and carrying on with the outsized lifestyle most people have gotten used to.

                                                                      1. 11

                                                                        As always, the truth lies in the middle.

                                                                        I want to call out this both-sides-ism. Basic shifting of the Overton Window can cause you to believe insane things if you assume that the truth always lies in the middle. Reasonable positions can seem extreme if you live in a society that, for example, has been shaped by fossil fuel billionaires for decades.

                                                                        It’s also wrong to ignore worst-case scenarios.

                                                                        There has been a great deal of discussion around the IPCC reports, which are very conservative (by which I mean cautious about only making predictions and proposals for which they have a great deal of evidence). Unlikely but catastrophic possibilities, such as the terrifying world without clouds scenario, also deserve attention. Beyond that are the “unknown unknowns”, the disaster scenarios that our scientists are not clever enough (or do not have the data) to anticipate.

                                                                        Global nuclear war or dinosaur killer asteroid impacts may seem unlikely today, but if we do not prepare for and take steps to avoid such cataclysms, someday we will get a very bad dice roll and reap the consequences.

                                                                        In other words, the obvious, predictable results of global heating on our current trajectory are bad enough, and I do not consider discussing them to be alarmism; the edge cases that might reasonably be seen as alarmism are, I feel, underappreciated rather than overpublicized, contrary to what you seem to believe.

                                                                        Put another way, the truth, rather than lying in the middle, might be significantly worse than anything the messaging from the mainstream climate movement suggests.

                                                                        1. 6

                                                                          I’ll just say that personal consumption habits are not what I’m talking about, although I can see why you would bring them up, given the article we are commenting on is about changing personal website design.

                                                                          Sustainability, and justice for those who suffer most in the climate crisis, will require changing how our society functions. It will require accounting for the true costs of our actions, and I’m not convinced that capitalism as we know it will ever hold corporations accountable for their negative externalities. It will require political change, on a local, national and global level. It will require grassroots direct action from the people. You as an individual can do little, but collectively I assure you we can change the world, for the better instead of for the worse.

                                                                          1. 6

                                                                            That the collective matters more than the individual is of course a truism. The real costs of a product are often hard to see. One good example is sustainably produced meat, which costs six times as much as the “normal” meat you can buy at the supermarket. Reducing meat consumption to once a week (instead of almost every day, which is insane) would greatly reduce an individual’s footprint, but I don’t hear Greenpeace talking about reducing meat intake, even though it makes up 28% of global greenhouse gas emissions.

                                                                            Instead, we are told to “change society” and accept new legislation that fundamentally changes not only our economies, which do deserve reform in many places, but also individual freedoms, with questionable benefit to anyone other than profiteers in certain sectors.

                                                                            So I hope I didn’t come across as someone denying the effects of climate change. What I don’t like is the alarmism, which has often been debunked over the past decades, only to sell extreme political measures. A much more effective approach, I think, would be to urge people to reduce their resource footprint and enable them to make the right choices.

                                                                            To give an example, the EU could stop funding mass meat production if it really cared about this topic at all. This sort of thing really undermines the credibility of the entire climate “movement”.

                                                                      2. 2

                                                                        (consider we are easing out of a small ice-age that just, out of chance, had its lowest point in the mid 1800’s when humans started measuring temperatures systematically).

                                                                        Have any sources so I can read more about this? First I’ve heard of this.

                                                                        1. 4

                                                                          Sure! There is a great paper called “Using Patterns of Recurring Climate Cycles to Predict Future Climate Changes” by Easterbrook et al. (published in Evidence-Based Climate Science (Second Edition), 2016), which is sadly paywalled so I can’t share it in full here, but there’s a great figure in it showing temperature readings from tree rings in China.

                                                                          Between 800 and 1200 we had the global medieval warm period, which for instance allowed people to grow wine grapes in England and is the reason Greenland is called “green” land (it wasn’t covered in ice when the Vikings discovered it around 900-1000). Temperatures were unremarkable between 1200 and 1600, followed by a “Little Ice Age” between 1600 and 1900. In general, one can indeed see that global temperatures are rising above the average of the last 2000 years, but it’s nothing unusual.

                                                                          To give one more example: glaciers receding in Norway due to the currently observable global warming are revealing tree logs and trading paths dating roughly from Roman times, in use between about 300 and 1500. If you look at the aforementioned figure, this coincides pretty well with the very warm period beginning around 300. Even though temperatures dipped again around 700, they never dropped low enough to let the glacier “recover”, which would explain why the path stayed in use until 1500, when the next cold period (the Little Ice Age) started.

                                                                          I hope this was helpful to you!

                                                                          1. 6

                                                                            It sounds like you’re arguing that the current global temperature rise is not due to humans, or is just a natural temperature cycle coming to an end, which is extremely wrong. The slight cooling period you’re talking about did happen, but as of now both the speed and projected magnitude of the current temperature changes are unprecedented in human history.

                                                                            We can argue all day about exactly how bad things are going to get given the temperature rise, and how much someone’s stupid little personal website is going to contribute to it, but that the temperature rise is man-made, and faster than any global temperature change in human history, is supported by a broad enough scientific consensus to be pretty much indisputable.

                                                                            1. 4

                                                                              This is a placeholder reply so I don’t forget (immediately quite busy), but there is no evidence the pre-industrial era “little ice age” was a global phenomenon.

                                                                              1. 4

                                                                                That could very well be! What I cited were results from Europe and Asia, and I would not be surprised if it turned out differently in other places of the world.

                                                                        1. 1

                                                                          I expect it has been proposed before, but to me it seems there might be a solution that makes all parties (reasonably) happy.

                                                                          Unreserve all unnecessary identifiers. If a future standard wishes to add tofloat to the standard library, then they do it. If a user wants to use this C2x feature, then they #define _C2x before any includes or compile with -std=c2x.

                                                                          1. 3

                                                                            This is essentially equivalent to Perl’s use v5 design (or Rust’s edition design, for a recent imitation). I agree there is a lot of wisdom in it.

                                                                          1. 4

                                                                            I think the paper draws the wrong conclusions. There are far too many reserved identifiers. In what world should identifiers like token, stride, stream, ERROR_SUCCESS, or member be reserved? Rather than add additional warnings, instead the C standard should relinquish its reservations on such common identifiers.

                                                                            1. 2

                                                                              The paper seems to agree with you, which is why it proposes to remove those words from the reserved identifiers list and recategorize them as potentially reserved in the future.

                                                                              I think it makes good sense. Realistically, the C language probably is not going to reserve token ever, but they want the ability to introduce identifiers like tofloat and tofastint without making code-breaking changes.

                                                                            1. 13

                                                                              In three steps you have renamed a git branch without making a big deal out of it, all while avoiding the wrath of internet reactionaries.

                                                                              Except for any tools/users/whatever that depend on the branch being called “master”. It’s not the best idea to depend on a branch instead of a SHA, but it does happen (e.g. https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36947, which caused our release process to break for a backport we were preparing).

                                                                              1. 7

                                                                                all while avoiding the wrath of internet reactionaries.

                                                                                On top of that, slipping this in at the end of the article is IMHO a bit underhanded. OP gets to fire off his barb, putting anyone who disagrees with the movement to rename branches into the bucket of “internet reactionary”. At the same time, it’s not a prominent point in the document, so anyone who objects to that characterisation is exposed to accusations of being petty.

                                                                                1. 4

                                                                                  Presumably, the “internet reactionaries” would be the ones who requested changing the name from master to main? That’s how I read it, at least.

                                                                                  I did also feel it was kind of unnecessary.

                                                                                  1. 2

                                                                                    As somebody else mentioned, a “reactionary” is conventionally a person of regressive or conservative leanings who resists efforts to change society by progressives. In this context, I used it to refer to people who feel that the word “master” is something worth protecting.

                                                                                    1. 2

                                                                                      Thank you for clarifying. I agree with your characterization.

                                                                                  2. 1

                                                                                    That line in the article had me slightly confused, as a simple definition of reactionary is a “person who is opposed to political or social change or new ideas”. Surely changing a branch name in the way described would attract the wrath of “internet reactionaries”?

                                                                                    1. 2

                                                                                      I believe that the author’s claim is something like: this change is sufficiently simple that there is no ground for “internet reactionaries” to stand on.

                                                                                      1. 1

                                                                                        This is a fair summary of the piece.

                                                                                        I am aware that I brushed aside, with a footnote, some difficulties with changing certain upstream sources’ default “checked-out” branch. I really did not want to turn the piece into a GitHub web UI tutorial that would be fully obsolete in 6 months. Despite that, I still think this request is too modest to reasonably refuse.

                                                                                1. 2

                                                                                  Bizarrely, my mobile provider (Three) blocks access to this domain. It works fine via WiFi!

                                                                                  1. 1

                                                                                    That’s really strange. I’ve been using it for several years via gandi.net. Do you have any guesses why?

                                                                                    1. 1

                                                                                      Unfortunately, no idea; going directly to the IP works fine, so it’s a real mystery. Sadly there’s no mechanism to discuss this with them either; the only thing I can think of is an accidental content block :/

                                                                                      1. 1

                                                                                        I suppose I asked for this treatment by describing myself as a “hacker”. :v(

                                                                                      2. 1

                                                                                        I’m on Three (UK) and can access your website fine

                                                                                    1. 2

                                                                                      Doesn’t the dog/cat example violate the strict aliasing rule?

                                                                                      1. 4

                                                                                        I’m not sure — is the rule taking effect at the time of pointer creation, or use?

                                                                                        In either case, if compilers accept that code, by Hyrum’s Law, it doesn’t matter what the standard thinks about it.

                                                                                        1. 2

                                                                                          An object can only have its stored value accessed via an lvalue of a compatible type. (C99 s. 6.5 p 7)

                                                                                          It is perfectly fine to merely copy the value of a pointer into an lvalue of an incompatible type as long as it is never dereferenced – although I cannot think of a reason why you would.

                                                                                          1. 3

                                                                                            There’s no requirement in C that sizeof(X*) and sizeof(Y*) are the same, so this might not be possible. char* and void* must be large enough to hold a pointer to anything, but in C on a sufficiently weird architecture, pointers may have different sizes depending on the pointed-to type. On machines with word-addressable memory, pointers to most types are just addresses, but char* and void* are an address plus a mask or offset.

                                                                                            This doesn’t matter for any mainstream architectures.

                                                                                            1. 2

                                                                                              I won’t call it good practice, but you can do this to store a pointer-to-X in a pointer-to-Y as long as you know that the latter is never dereferenced as such. I.e. you always convert it back to pointer-to-X before dereferencing it. You could for example (ab)use an unused structure member to hold a different kind of pointer than its declared type.

                                                                                          2. 2

                                                                                            That rule is commonly misunderstood. It’s permissible to convert a pointer of one type to a pointer of another type; it’s not permissible to dereference a pointer if it points to an “object” (that’s the standard’s terminology) of the wrong type (with some exceptions/allowances).

                                                                                            If I remember rightly, converting a pointer from one type to another results in an implementation-defined value anyway, i.e. it doesn’t even necessarily point into the same object. Of course, with your typical compiler, it does.

                                                                                          1. 5

                                                                                            What does this do to all the branches forked off of master? Do they now track main?

                                                                                            What does this do to everyone else’s repository? Does their local master branch automatically start tracking the remote main branch?

                                                                                            1. 13

                                                                                              Branches don’t reference the branch they are forked from; they “just” have a common history which can be used to merge them.

                                                                                              1. 1

                                                                                                They do reference it as well as sharing a common history. Many git commands rely on that reference. You can change the referenced branch with git branch --set-upstream-to and see it with git branch -vv

                                                                                                1. 5

                                                                                                  I think you’re talking about two different things: remote tracking relationships vs. the commit a branch was created from.

                                                                                              2. 8

                                                                                                All good questions!

                                                                                                Branches based on master do not “switch” to main, but since no tracked data has changed, those branches can be rebased automatically.

                                                                                                Others who have master checked out and write permission could still push to master and re-create it. There’s no real solution to this unless the “upstream” can be set to reject branches named “master”.
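
                                                                                                For concreteness, here is one common sequence for the rename itself – a sketch of mine assuming a remote named origin, not necessarily the article’s exact three steps:

                                                                                                    git branch -m master main           # rename the local branch
                                                                                                    git push -u origin main             # publish it and set its upstream
                                                                                                    git push origin --delete master     # drop the old remote branch (a push can still
                                                                                                                                        # re-create it unless the server rejects it)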

                                                                                                1. 6

                                                                                                  No, it breaks clones, and every fork has to rename as well, or change the tracking.
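
                                                                                                  For anyone hitting that, a sketch of the fix-up an existing clone typically needs after the upstream rename (again assuming the remote is called origin):

                                                                                                      git fetch origin --prune                        # pick up the new branch, drop the deleted one
                                                                                                      git branch -m master main                       # rename the local branch too
                                                                                                      git branch --set-upstream-to=origin/main main   # track the renamed remote branch
                                                                                                      git remote set-head origin -a                   # refresh what origin/HEAD points to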

                                                                                                  1. 1

                                                                                                    This is a response to that article, basically taking a position entirely counter to it, not a repost of the original. I’m not sure if this is suitable for folding?

                                                                                                    1. 4

                                                                                                      It is, it keeps the discussion together, and (though this isn’t one) it disincentivizes hot takes by not rewarding them with their own slot on the front page.

                                                                                                      1. 1

                                                                                                        Unfortunately the OP didn’t do so well so I didn’t even know this link was submitted. Thanks for folding!

                                                                                                        1. 1

                                                                                                          I appreciate the motivation, but I still find it disappointing and unfair that newer (and frankly, higher-quality) content is merged away from the front page where few will see it.

                                                                                                          I’m not sure that this is technically how folding works, but it appears that if multiple links are posted to similar topics within a time-span of roughly a few days, then they all share the “popularity factor” of whichever was posted first.

                                                                                                          I feel like, if an article spurs multiple other people to write about the same topic (as happened in this case), then people are still talking about that topic, and it deserves at least one spot on the front page.

                                                                                                          Maybe the comment threads could all be merged, but individual links preserved? Or all links on the topic could share the highest “popularity factor” instead?

                                                                                                          1. 2

                                                                                                            Stories are ranked by “hotness”, calculated here.

                                                                                                          2. 1

                                                                                                            I see, thanks for the clarification!

                                                                                                          3. 1

                                                                                                            Having a counterpoint in the same context is valuable.

                                                                                                        1. 10

                                                                                                          Didn’t Microsoft pay out a fat settlement for doing less than this with their browser?

                                                                                                          1. 11

                                                                                                            Because they were abusing their monopoly, which Apple doesn’t have; Apple isn’t even the biggest player in the market.

                                                                                                            1. 4

                                                                                                              Windows : PC :: Apple : Mobiles, then sure, no monopoly.
                                                                                                              Windows : Intel PCs :: Apple : A5 Mobiles, then 😉

                                                                                                              Or maybe it’s about what we can do with a particular form factor? Was Windows a monopoly because of the network effects of its software ecosystem? Apple has one of its own, complete with exclusives.

                                                                                                              Microsoft enjoys so much power in the market for Intel-compatible PC operating systems that if it wished to exercise this power solely in terms of price, it could charge a price for Windows substantially above that which could be charged in a competitive market. Moreover, it could do so for a significant period of time without losing an unacceptable amount of business to competitors. In other words, Microsoft enjoys monopoly power in the relevant market.

                                                                                                              That Apple Tax.

                                                                                                              It all depends on how we draw the lines of monopoly.

                                                                                                              1. 2

                                                                                                                The Apple Tax is 110% a real thing for desktop and laptop form factors.

                                                                                                                On mobile it’s a whole other thing. Compare, for instance, their Geekbench scores:

                                                                                                                The cheapest iPhone (the $400 iPhone SE) scores 1326.

                                                                                                                The $2000 Samsung Galaxy Ultra scores 840 on the same benchmark.

                                                                                                                The cheapest iPhone is nearly twice as fast (single-core) as the fastest Android. On multi-core it’s still faster, but only slightly.

                                                                                                            2. 3

                                                                                                              I haven’t read the EULA for iOS in its entirety, but if it says that the default browser is not allowed to be changed, then that is a solid (legal) defense for Apple. I did just check, and it has a clause stating that you aren’t allowed to modify the software. Changing what application is opened when you click a link in the Mail app (for example) would likely constitute modifying the software in a court of law.

                                                                                                              I don’t understand why people still buy Apple products at this point.

                                                                                                              1. 6

                                                                                                                EULAs are generally unenforceable against private parties in the US* and Apple’s legal has far better things to do than to file frivolous lawsuits against people who jailbreak their phones to change the default browser. The situation with AOL and Microsoft fighting for the dominant browser doesn’t exist today, so an antitrust case against Apple for bundling Safari seems much weaker now. Remember that there are far more Android phones than iPhones today; in the 90s, Windows had over 90% of the market.

                                                                                                                *If you’re an iPhone reseller and you jailbreak the phones you’re selling to change the default browser, then Apple might go after you. If you’re a private party, nobody cares.

                                                                                                                1. 1

                                                                                                                  EULAs are generally unenforceable against private parties in the US* and Apple’s legal has far better things to do than to file frivolous lawsuits against people who jailbreak their phones to change the default browser.

                                                                                                                  This is very true. My point however, was that somebody who tries to sue Apple because they don’t provide this functionality wouldn’t get very far because of the EULA that they (either implicitly or explicitly) agreed to.

                                                                                                                  Remember that there are far more Android phones than iPhones today; in the 90s, Windows had over 90% of the market.

                                                                                                                  I believe this is one of the reasons that we haven’t seen any of these lawsuits.

                                                                                                                  Edit: accidentally posted with an edit for my other comment.

                                                                                                            1. -1

                                                                                                              This appears to be an advertising publication for Github. I have flagged it spam.

                                                                                                              While obviously new Github features would be of interest to many readers here, in a recent similar post on Lobsters it was mentioned that anybody who really wants to know about these is almost certainly receiving their email publications.

                                                                                                              1. 1

                                                                                                                Look at the discussion it’s spawned though: much of it is primarily about what the impact of the product launch will be, freedom, etc. You can’t get that from a GitHub email.

                                                                                                                I missed the post you’re referring to, would you mind linking it?

                                                                                                              1. 1

                                                                                                                In these testing times it is important that we all try our best to support our governments.

                                                                                                                Uh, what? I don’t think I’ve ever heard this opinion expressed until now.

                                                                                                                1. 2

                                                                                                                  While it sounds cliché, the only way we can get through the pandemic is cooperation. In this instance that means supporting our governments. As a European, it seems mad to me that Americans are protesting about lockdown, as it puts everyone at risk. Most of the people in Europe that I’ve spoken to share my opinion here.

                                                                                                                1. 5

                                                                                                                  No system is perfect, and I will choose a vulnerable one over a malicious one, obviously.

                                                                                                                  1. 2

                                                                                                                    Then you should use GrapheneOS, Replicant or LineageOS on regular hardware. I wouldn’t call them malicious, but they are far more secure than PureOS or postmarketOS.

                                                                                                                  1. 17

                                                                                                                    And the corollary: The first line of your shell script should be

                                                                                                                    #!/bin/sh
                                                                                                                    

                                                                                                                    If you need bashisms for something complex that needs to be maintainable, use #!/usr/bin/env bash as the article suggests, but 99% of ‘bash’ scripts that I’ve seen are POSIX shell scripts and work just fine with any POSIX shell.
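
                                                                                                                    As a rough illustration (mine, not from the article), the whole prologue for that common case can stay within POSIX:

                                                                                                                        #!/bin/sh
                                                                                                                        # POSIX-only strictness: -e aborts on errors, -u flags unset variables.
                                                                                                                        # (bash's set -o pipefail has no widely portable equivalent here.)
                                                                                                                        set -eu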

                                                                                                                    1. 16

                                                                                                                      And the corollary: The first line of your shell script should be

                                                                                                                      I disagree, enormously. There are some specific, isolated, and extremely uncommon situations where it’s important you target sh; the rest of the time, you are constraining yourself for no appreciable benefit.

                                                                                                                      In particular, if you did go and change the interpreter to /bin/sh, you would also need to change the second line, since (at least) pipefail is a bashism.

                                                                                                                      1. 3

                                                                                                                        I think being constrained to a smaller and simpler set of features is the appreciable benefit.

                                                                                                                        This applies doubly if somebody is going to have to maintain this code.

                                                                                                                        1. 2

                                                                                                                          One does not necessarily follow the other. I’ve seen horrific apparently-POSIX-sh scripts with the most unmaintainable contortions to avoid using a widely known bash feature that would make the code far more legible. E.g. bash arrays.
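
                                                                                                                          As a sketch of what I mean (my own example, not from any particular script), the array version handles arguments containing spaces without any tricks:

                                                                                                                              #!/usr/bin/env bash
                                                                                                                              # a bash array keeps each argument intact, spaces and all
                                                                                                                              args=(--title "My Report" --out "final draft.pdf")
                                                                                                                              printf '%s\n' "${args[@]}"

                                                                                                                          The closest POSIX sh gets is set -- --title "My Report" --out "final draft.pdf" followed by "$@", which clobbers the script’s own positional parameters.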

                                                                                                                          1. 1

                                                                                                                            I don’t disagree; often there is no nice way in posix sh to do something that a bash array could. But I’ve found that once you begin to feel a desire for any kind of data structure in your program, it’s probably time to switch to a more robust programming language than (ba)sh.

                                                                                                                            1. 2

                                                                                                                              I generally agree. When I look back at my professional career (16 years and counting), I’ve spent a remarkable amount of it essentially writing bash scripts. Which is a surprise.

                                                                                                                              Most recently it’s been for containers, which can’t carry a larger interpreter inside them for space reasons, ruling out Python etc.

                                                                                                                        2. 1

                                                                                                                          In particular, if you did go and change the interpret to /bin/sh, you also need to change the second line, since (at least) pipefail is a bashism.

                                                                                                                          Thanks. I have little experience with non-bash shells and I didn’t want to assume that this would work in a non-bash script. Good to learn that it doesn’t.

                                                                                                                        3. 14

                                                                                                                          Good POSIX shell reference here:

                                                                                                                          http://shellhaters.org/

                                                                                                                          1. 5

                                                                                                                            Fair point. I am trying to target the 100%. It feels to me that it’s better to program against bash than to risk failing in a corner case on a shell I never tested my script against.

                                                                                                                            1. 2

                                                                                                                              For 99% of shell scripts it’s a non-issue. A lot of people use things like arrays in bash when simple quoted strings work on every shell. Writing portably across multiple Bourne shell interpreters is easier than you think it is. Most of my scripts run under any of ksh/zsh/bash/sh with zero changes. And testing-wise… there isn’t much to test, tbh, apart from -o pipefail, which only works in zsh and bash. Generally the -u catches those cases anyway.

                                                                                                                              What kind of corner case are you thinking of? Also, as yule notes, run shellcheck on your scripts when writing them. That will reduce your edge cases significantly.

                                                                                                                              1. 4

                                                                                                                                The annoying thing is that bash doesn’t disable bash extensions when invoked as sh. A couple weeks ago, I had to debug why a script written by a colleague didn’t work. It turned out that he used &> to redirect both stderr and stdout to a file. I didn’t even know this existed, and he didn’t even know it wasn’t portable. The nasty part is that this doesn’t cause an error. Instead, foo &> bar is parsed as foo & (not sure what happens to the > bar bit; file is created but not hooked up to stdout of foo), which causes race conditions with whatever comes next relying on foo having already completed.

                                                                                                                                Of course, with #! /usr/bin/env bash in the script, this wouldn’t have been a problem on my machine, but I maintain that it would be more productive if bash disabled all of its non-POSIX extensions when invoked as sh. More commonly, you see people use [[ ... ]] instead of [ ... ]. I’m still not even sure how [[ is different from [.
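
                                                                                                                                For reference, a small bash-runnable sketch of that redirect pitfall (ls /nonexistent stands in for the real command):

                                                                                                                                    #!/usr/bin/env bash
                                                                                                                                    # bash shorthand: send stdout and stderr to the same file
                                                                                                                                    ls /nonexistent &> out.log

                                                                                                                                    # portable spelling that any POSIX sh parses the same way
                                                                                                                                    ls /nonexistent > out.log 2>&1

                                                                                                                                    # under plain sh, the first form is instead parsed as `ls /nonexistent &`
                                                                                                                                    # (backgrounded) followed by `> out.log` (just create/truncate the file)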

                                                                                                                                1. 1

                                                                                                                                  I’m still not even sure how [[ is different from [.

                                                                                                                                  I vaguely know that and I switch out Bash for Python as soon as even simple string or array manipulation kicks in. My reasoning is that programs always become more complex over time. You start with simple string manipulation and soon end up with edge case handling and special cases. Best to switch to a language like Python which supports that.

                                                                                                                                  1. 1

                                                                                                                                    I think (among other things) [[ disables word splitting, i.e. you can use variables without needing to quote them. At least that’s how it works in ksh.
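
                                                                                                                                    A quick illustration of that difference (a sketch; runs under bash or ksh):

                                                                                                                                        #!/usr/bin/env bash
                                                                                                                                        var="two words"

                                                                                                                                        # [[ ]] is shell syntax, so $var is not word-split and the quotes are optional
                                                                                                                                        [[ $var == "two words" ]] && echo "matched with [[ ]]"

                                                                                                                                        # [ is an ordinary command; unquoted, $var would expand to two separate
                                                                                                                                        # arguments and the test would fail, so the quotes are required here
                                                                                                                                        [ "$var" = "two words" ] && echo "matched with [ ]"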

                                                                                                                              2. 4

                                                                                                                                Are they though? POSIX shell is so extremely limited that most scripts end up with some bashisms in them, if only to send a string to a program’s stdin. foo <<< "$var" is so much nicer than printf "%s" "$var" | foo (and no, you can’t use echo because POSIX is so vague that you can’t correctly tell echo to treat an argument as a string and not an option in a portable way)

                                                                                                                                I’ve made the mistake of thinking my script is POSIX, using #!/bin/sh, and then hearing from Ubuntu users that it doesn’t work in dash. (I’ve also had problems where my script actually is POSIX, but dash is so riddled with bugs that it still didn’t work on Debian-based systems.)
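
                                                                                                                                For reference, a sketch of the three variants being discussed, with grep standing in for foo and a deliberately awkward value:

                                                                                                                                    #!/usr/bin/env bash
                                                                                                                                    var="-n"

                                                                                                                                    grep -c -- -n <<< "$var"              # here-string: bash/zsh/ksh93 only, prints 1
                                                                                                                                    printf '%s\n' "$var" | grep -c -- -n  # portable and unambiguous, prints 1
                                                                                                                                    echo "$var" | grep -c -- -n           # many echo implementations swallow the -n, prints 0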

                                                                                                                                1. 6

                                                                                                                                  That’s why it’s a good idea to always lint your scripts with shellcheck. It will catch all the bashisms when you set the shebang to /bin/sh.
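
                                                                                                                                  For example (deploy.sh being whatever script you are checking), shellcheck picks the dialect up from the shebang, or you can force it:

                                                                                                                                      shellcheck deploy.sh             # dialect inferred from the #! line
                                                                                                                                      shellcheck --shell=sh deploy.sh  # treat it as POSIX sh and flag bashisms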

                                                                                                                                2. 3

                                                                                                                                  In which case -o pipefail is undefined, at least according to shellcheck.

                                                                                                                                  In any case I find running shellcheck on shell scripts to be really valuable. You don’t have to follow it blindly, but it has some very good hints.

                                                                                                                                  1. 1

                                                                                                                                    shellcheck is my favorite linter for bash scripts as well. It sometimes feels a bit aggressive but I prefer that as opposed to making a mistake.

                                                                                                                                  2. 2

                                                                                                                                    I respectfully disagree. Using the /usr/bin/env method already requires bash, so you can use all the bashisms you want and simply require users of your scripts to have bash installed.

                                                                                                                                    For 95% (wild-ass guess) of Unix users, this is the case. For the rest, if you’ve stated the requirements clearly, they’ll either accept the requirement or form a large enough crowd with pitchforks to force you to change.

                                                                                                                                    1. 2

                                                                                                                                      *BSD, Solaris, and Android don’t have bash installed by default and it isn’t part of their base systems so it isn’t available until /usr/local is mounted and can’t be used anywhere in early boot. Ubuntu uses dash as /bin/sh, but does install bash by default. Embedded Linux systems typically use busybox or similar and so don’t have bash.

                                                                                                                                      95% of Unix users probably covers all Android users, so that’s not a particularly useful metric. The interesting metric is the set of people you expect to use your script. If it’s GNU/Linux users, that’s fine. If it’s people running large desktop / server *NIX systems (macOS, *BSD, Linux), bash is probably available easily but it may not be installed on any given system so you’ve added a dependency.

                                                                                                                                      By all means, if you depend on bashism then use bash. In general, I’ve found that my shell scripts fall into two categories:

                                                                                                                                      • Simple things, where the POSIX shell is completely adequate and so there’s no reason to use anything other than /bin/sh.
                                                                                                                                      • Complex things where I actually want a rich scripting language. Bash is a pretty terrible scripting language, but it’s a much lighter weight dependency than something like Python so I’ll often use bash for these things.

                                                                                                                                      I use bash as my interactive shell on FreeBSD, so I don’t really mind having it as a dependency for things, but I consider it somewhat impolite to add dependencies in things I distribute when I don’t really need them.

                                                                                                                                    2. 2

                                                                                                                                      Why #!/usr/bin/env bash as opposed to #!/bin/bash tho? I’ve seen that done for ruby scripts, and I figured it’s because maybe ruby is in /usr/local/bin, but isn’t bash always in /bin/bash?

                                                                                                                                      1. 10

                                                                                                                                        Not on all systems. e.g. on FreeBSD and OpenBSD it’s /usr/local/bin/bash.

                                                                                                                                        1. 7

                                                                                                                                          Or Nix and Guix, which put it in an entirely different place (e.g. on Guix it’s something like /gnu/store/.../).

                                                                                                                                        2. 4

                                                                                                                                          On the BSDs it should be under /usr/local/bin or /usr/pkg/bin. macOS ships an old version of bash, and one might want to install a newer version from Homebrew, which goes under /usr/local/bin.
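
                                                                                                                                          A quick way to see what env would pick up on a given machine (assuming the same PATH the script will run with):

                                                                                                                                              command -v bash                       # first bash on $PATH: /bin, /usr/local/bin, a Nix store path, ...
                                                                                                                                              env bash -c 'echo "$BASH_VERSION"'    # the version that #!/usr/bin/env bash would actually run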

                                                                                                                                          1. 1

                                                                                                                                            Using env bash let me both use a newer Homebrew bash on Macs and keep the script working on Linux machines, and possibly even on Windows via WSL. On Macs the newer Homebrew bash was needed for a switch statement, if memory serves.

                                                                                                                                            For ruby, chruby lets one install multiple versions on one system, and flip between them.