I don’t know what architecture the author had in mind, but this is not universal truth:

    Your first instruction is at the address 0xFFFFFFF0.

    1. 4

      Unfortunately, as the linker grew and evolved, it retained its lack of structure, and our sole Turing award winner retired.

      Wait, did Ken Thompson retire? Well deserved, if so!

      1. 2

        A few years ago Rob Pike said in a talk that Ken retired but was still involved in fundamental decisions.

        1. 13

          Every time I hear of a CS giant retiring I think of Knuth’s comments on his retirement decision:

          Being a retired professor is a lot like being an ordinary professor, except that you don’t have to write research proposals, administer grants, or sit in committee meetings. Also, you don’t get paid.

      1. 1

        Having a plain HTML website without any CSS is not a sin, I think.

        1. 2


          I’m very curious, what is the platform you mentioned C has no support on?

          1. 4

            I was referring to Windows, which has C support but no sane API/environment to run it in.

            1. 2

              You mean it doesn’t support POSIX?

              1. 1

                Not if you want luxuries such as being able to open a file with non-ASCII characters, or include any system header that is newer than K&R C.

                And then there’s MSVC, which appears to be (un)maintained out of spite (Microsoft thinks C is obsolete, and works to make it so).

          1. 1

            Wikipedia has a succinct version of it:

            1. Avoid complex flow constructs, such as goto and recursion.
            2. All loops must have fixed bounds. This prevents runaway code.
            3. Avoid heap memory allocation.
            4. Restrict functions to a single printed page.
            5. Use a minimum of two runtime assertions per function.
            6. Restrict the scope of data to the smallest possible.
            7. Check the return value of all non-void functions, or cast to void to indicate the return value is useless.
            8. Use the preprocessor sparingly.
            9. Limit pointer use to a single dereference, and do not use function pointers.
            10. Compile with all possible warnings active; all warnings should then be addressed before release of the software.
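
            To make a few of these rules concrete, here is a toy sketch (not NASA code; sum_items and MAX_ITEMS are invented for illustration) showing rules 2, 5, and 7:

            ```c
            #include <assert.h>
            #include <stdio.h>

            #define MAX_ITEMS 64 /* rule 2: every loop gets a fixed upper bound */

            static int sum_items(const int *items, int n) {
                assert(items != NULL);            /* rule 5: at least two       */
                assert(n >= 0 && n <= MAX_ITEMS); /* runtime assertions         */
                int sum = 0;
                for (int i = 0; i < n && i < MAX_ITEMS; i++) /* bounded even if n lies */
                    sum += items[i];
                return sum;
            }

            int main(void) {
                int xs[] = {1, 2, 3};
                int s = sum_items(xs, 3);    /* rule 7: check return values...  */
                assert(s == 6);
                (void)printf("sum=%d\n", s); /* ...or cast to void deliberately */
                return 0;
            }
            ```

            Rules 1 (no recursion) and 3 (no heap) happen to hold here too; the point is only that each rule is mechanical enough for a checker to enforce.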


            1. 3

              It’s very refreshing that modern languages can make improvements every few months, and users actually adopt them.

              I’m incredibly frustrated that in C nothing ever gets fixed. Not even the smallest thing. Even if the C standard fixes some wart, it takes literally 20 or 30 years before major projects allow it.

              1. 2

                I’m really curious what should be fixed in C (besides some operators having the “wrong” precedence).

                Side note: I once asked an automotive supplier what language they use. He said: “C, because it has to be compilable[1] 20 years from now.”

                So I guess “nothing ever getting fixed in C” is one of the reasons C is so popular in industry.

                [1] That was a few years ago and I don’t remember whether he explicitly said “compilable”. But I think it was deducible from context.

                1. 3

                  I’m not sure if this is a good place for a laundry list of my C pet peeves. And I’m afraid that if I list small ones, they’ll be easy to shoot down as unnecessary, and big ones as making it C++ all over again… but here it goes:

                  • Safe way to detect numeric overflow. Unhandled overflows in malloc(len * size) are common and super dangerous. Signed overflow is UB. Integer promotions may make arithmetic unexpectedly signed. So you have to be super duper careful how you check for overflow, or you’ll make it worse and accidentally “prove” to the compiler that it can delete your safety checks.

                  • Syntax for intended case fall-through, so that compilers can reliably warn about unintended ones.

                  • Default variables to initialized, and like D require = void for cases where uninit is actually wanted. Uninitialized data is painful to debug, and even worse is that the UB gives the optimizer a license to kill. I’ve wasted soo much time on weird bugs because of this, and it has never made a difference in performance (since the optimizer can eliminate redundant initializations anyway).

                  • Some UB is necessary for performance (e.g. without signed overflow being UB indexing by int would be slow), but there’s also a ton of UB for 70s weirdo machines, segmented memory, non-8-bit-bytes. I’m sure a ton of it could be dropped without regret.

                  • A standard way to silence warnings from the code itself. I like to have pedantic warnings, but sometimes I have to tell the compiler that I’ve reviewed the warning and it’s OK. In C this requires either non-standard attribute or pragma, or silly tricks to hide the offending code from the compiler, which may backfire by causing other warnings in another compiler.

                  • Actual immutability. const is a joke. Const pointer to const data paradoxically means mutable data (due to wrong variance). const guarantees are too weak for optimizers to use them for anything.

                  • Non-nullable pointers. ObjC has them. Syntax is of course ugly, but they’re very useful.

                  • I could do without the ritual of include guards. It feels like absolutely necessary best practice only because include is dumb, and there’s no include_once.

                  • If there were a mechanism like Go’s defer, it’d spare me from so, so many goto cleanups or deeply nested else free paths. In code where I really have to care about correctness and memory leaks, I feel like 80% of each function is dedicated to error handling. OTOH in Rust it’s just ? in a few places, and it’s even more reliable.
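
                  To illustrate the first bullet: with GCC or Clang you can at least use the checked-arithmetic builtins today (C23 standardizes the same idea as ckd_mul in <stdckdint.h>); malloc_array here is a made-up helper, not a standard function:

                  ```c
                  #include <assert.h>
                  #include <stdint.h>
                  #include <stdio.h>
                  #include <stdlib.h>

                  /* Overflow-checked allocation: fail cleanly instead of wrapping.
                     __builtin_mul_overflow is a GCC/Clang extension. */
                  static void *malloc_array(size_t len, size_t size) {
                      size_t total;
                      if (__builtin_mul_overflow(len, size, &total))
                          return NULL; /* len * size would wrap around */
                      return malloc(total);
                  }

                  int main(void) {
                      /* Unchecked, this request would wrap to a tiny allocation. */
                      void *p = malloc_array(SIZE_MAX / 2, 4);
                      assert(p == NULL);

                      void *q = malloc_array(16, 4); /* 64 bytes, fine */
                      assert(q != NULL);
                      free(q);
                      puts("ok");
                      return 0;
                  }
                  ```

                  The hand-rolled equivalent (if (size && len > SIZE_MAX / size)) works too, but it’s exactly the kind of check that’s easy to get subtly wrong.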

              1. 9

                Would love to see some more info about build quality, battery life, touchpad performance, how many nits the display can deliver, and so on.

                A friend bought a 2015 model (I believe) and he was not happy with the overall build quality. But I had the chance to have the newer InfinityBook model in my hands for a short moment and I have to say that it felt much better (build quality-wise).

                Glad to see more Linux-first devices. Tuxedo seems to be a smaller German manufacturer. Is this CLEVO hardware? Do they support fwupd?

                1. 3
                  1. 2

                    Thanks for the feedback. Since I got quite a few hardware-detail related questions, I will write a follow-up blogpost covering those. I’ve also approached the vendor to see whether there are more details that can be covered.

                    1. 1

                      Definitely interested in a follow up on this.

                    2. 2

                      A colleague of mine had a Tuxedo notebook, but the thing looked more Chinese than German. (I don’t know what version it was, though.)

                    1. 4

                      Beware, the article is written in an explicit KJV jargon :)

                      Regarding Rule 9:

                      Thy external identifiers shall be unique in the first six characters

                      Did ancient C compilers consider only the first 6 characters significant?

                      1. 9

                        Ancient linkers sometimes only treated the first 6 characters as significant. A hangover from Fortran usage.

                        1. 3

                          Oh. OH! That’s why it says externally visible identifiers, haha! statics would normally not be the linker’s problem. :)

                        2. 6

                          Beware, the article is written in an explicit KJV jargon

                          Bah, I prefer my subliterate parodies regarding programming to be in the original Aramaic.

                          1. 3

                            And written on parchment…

                            1. 3

                              Excuse me, papyrus is the only acceptable substrate for documentation

                            2. 2

                              For a parody of the Ten Commandments, don’t you mean Hebrew?

                              I found the faux KJV pretty grating. Even skimming the headings, I don’t think I got past number 5.

                              1. 1

                                You’re correct, this is what I get for my unresearched quips.

                          1. 3

                            You can probably get more insight by grabbing some deb, e.g.

                            apt-get download coreutils

                            and extracting it

                            dpkg --extract coreutils_8.26-3_amd64.deb /tmp/coreutils
                            1. 1

                              hwj e-mailed 2 days ago

                              Wait, you can interact with lobste.rs over email?!

                              edit: wow, you can, with ‘mailing list mode’! That’s amazing!

                              1. 1

                                Yes, this is something HN doesn’t have ;) I also use mutt to filter for posts I’m interested in.

                            1. 3

                              There are already a lot of programming languages that “compile” (transpile?) to C.

                              Socially, it might be true:

                              • “the kernel guys write in assembly” -> “the kernel guys write in C”
                              • “your daemon is in assembly! rewrite it in C!” -> “your daemon is in C! rewrite it in Rust!”¹

                              At some point every popular open source project in C gets to face the question: do I ignore these GitHub issues asking to RIIR¹? Some do, some others do not.

                              Programming languages become communities more than projects: borrowing foreign code is commonplace, and using zero dependencies becomes the exception, even for libraries.

                              C does not have a “cc fetch” feature to download all libraries (pkg_add, pkg, apk, apt, yum, pacman, brew, … do it instead). And nobody but package maintainers ever deals with GNU’s autoconf mess (sorry for the rudeness, but spending time with it does not help).

                              This is the decisive difference that makes so many C, C++ and Java developers look for fresh meadows to sleep on.

                              Rust and Go are taking most of the attention, with C still actively used for fewer categories of software, much like ASM before it; but Rust and Go do not compile to C the way C compiles to ASM.

                              Rust is backed by LLVM² and Go has its own architecture-independent intermediate assembly language³ to split the compilation into two steps. So Rust/Go -> C -> ASM -> machine code is wrong; it is different.

                              1. 1

                                Actually, Go’s approach to assembly may have changed a bit since I last saw it, but the principle of keeping as much as possible architecture-independent is still there. The details are in .

                                1. 2

                                  Interesting. I’m subscribed to golang-dev@ but haven’t heard of this. Do have an example?

                                  1. 2

                                    One more mailing list to subscribe to! (might as well Go for the Rust one).

                                    I’m only knowledgeable about a few bits and do not have the full picture, but from what I saw:

                                    1. 2

                                      The rust-dev mailing list has been defunct for years. You probably want https://internals.rust-lang.org

                              1. 3

                                Just an FYI: you can do all of the apt-* commands as just apt now (apt install package)

                                1. 1

                                  Thanks. The muscle memory isn’t easy to rewire ;)

                                1. 1

                                  For those unaware how small Lua actually is: its BNF grammar has only 23 rules.

                                  Source: http://www.lua.org/manual/5.3/manual.html#9

                                  1. 13

                                    I know analogies aren’t supposed to be 100% accurate but I like pushing on them to see what happens, so:

                                    • The curve is actually more like a tilde: if you wash dishes immediately, it’s much less expensive than washing them later. The crumbs haven’t set in and are easier to remove, you can use cold water, and you can dry more stuff in the rack (so you don’t need a towel).
                                    • Dishwashers are much more water-efficient than washing by hand. It’s also safer and kills more bacteria. On the other hand, there are some things you can’t dishwash, like cooking knives and cast iron pots. Deploy in bulk, with some artisanal services being manual?
                                    • Isn’t the “fleet of dishwashing robots” equivalent to hiring a person to just wash dishes fulltime? So it would be like having manual deploys, but have one person on the team dedicated to doing nothing but that.
                                    • Where do disposable plates fit into this?
                                    1. 4

                                      Disposable plates are one-off shell scripts or data cleansing scripts you write to pull off that one migration for the client with a big contract.

                                      1. 2

                                        Or Perl one-liners you can’t decipher a few seconds after composing them ;)

                                      2. 2

                                        Isn’t the “fleet of dishwashing robots” equivalent to hiring a person to just wash dishes fulltime?

                                        Hmm… I had a different interpretation of this. I thought this was more akin to having a number of zero-hour contract staff available (except the staff are obliged to take the work). The technical equivalent here being that instead of one machine always available to handle incoming work (which is queued), a machine would be dedicated to every item in the queue, so there is effectively no queue.

                                        1. 1

                                          a machine would be dedicated to every item in the queue, so there is effectively no queue.

                                          That’s exactly what I intended with that analogy, yes :)

                                        2. 1

                                          On the other hand, there are some things you can’t dishwash, like cooking knives and cast iron pots.

                                          Technically you can, but you’re creating more maintenance for yourself, what with sharpening and seasoning.

                                        1. 13

                                          I’ll note the other thing the announcement says is “On the other hand the level of interest for this architecture is going down, and with it the human resources available for porting is going down” and the author of this post isn’t offering to step up and maintain it (either for Debian or the other two distros they mention).

                                          I’d expect Debian would be fine keeping it if there were people willing to maintain it, but if there aren’t, then it’s better it gets dropped rather than kept decaying further. Also, IIRC this has happened before for cases like this: if there are in fact lurking people willing to maintain MIPS, then this might get reversed if volunteers come to light as a result of this announcement.

                                          1. 4

                                            “Might” being the key word; a whole group of us got together to try to “save” ppc64 and Debian wasn’t interested, more than likely because we weren’t already Debian developers. It’d be nice if the “ports” system was more open to external contributions. But mips isn’t even going to ports, it’s being removed.

                                            1. 3

                                              From my experience, if you aren’t already a Debian developer, you aren’t going to become one. My experience trying to contribute to it was absolutely miserable. I’ve heard that changed somewhat, but I don’t feel like trying anymore.

                                              1. 1

                                                Can you speak more to this issue? I’m curious as to whether it was a technical or social problem for you, or both.

                                                1. 3

                                                  More of a social problem. I wanted to package a certain library. I filed an “intent to package” bug, made a package, and uploaded it to the mentors server as per the procedure. It got autoremoved from there after a couple of months of being ignored by people supposed to review those submissions. Six months later someone replied to the bug with a question whether I’m going to work on packaging it.

                                                  I don’t know if my experience is uniquely bad, but I suspect it’s not. Not long ago I needed to rebuild a ppp package from Buster and found that it doesn’t build from their git source. Turned out there was a merge request against it, unmerged for months; someone probably pulled it, built an official package, and forgot about it in the same fashion.

                                                  Now three years later that package is in Debian, packaged by someone else.

                                                  1. 2

                                                    I don’t know if my experience is uniquely bad, but I suspect it’s not.

                                                    Seems like you’re right: https://news.ycombinator.com/item?id=19354001

                                            2. 3

                                              …and the author of this post isn’t offering to step up and maintain it (either for Debian or the other two distros they mention).

                                              From the author’s github profile:

                                              Project maintainer of the Adélie Linux distro.

                                              1. 0

                                                Hmm, maybe. I’d bet against it. If Debian is going (reading between the lines) “maintaining modern software on this architecture is getting really hard” then I’d bet against anyone else adding support. Maybe I’ll lose that bet, in which case I owe someone here several beers, but I’ll be very surprised!

                                            1. 2

                                              Looks like a slam dunk for a lot of users currently on Git. It’s also written by the competition. Any pro-Git people want to counter any of it?

                                              1. 18

                                                I’m not really pro-git, but some of the strawman arguments can be easily countered.

                                                Fossil is more efficient? Then how come git scales better? And git does not scale that well either, if you look at what Microsoft had to do to it for a 300GB repository. Describing GitLab as “a third-party extension to Git wrapping it in many features, making it roughly Fossil-equivalent, though much more resource hungry and hence more costly” is so wrong it’s funny. At least compare it with Gitea.

                                                The article discusses the design philosophies but never considers Torvalds design criteria like speed.

                                                Fossil uses the most popular database in the world? You could also say git uses an even more popular one called “file system”.

                                                Git is POSIX-only? Git certainly has more Windows users than fossil has users in total.

                                                Fossil has many check-outs per repository? Git also.

                                                Fossil remembers what you actually did? Git can do that just as well. Git provides a choice where fossil is opinionated.

                                                In general, this is a one-sided pro-fossil page. Not surprising given the domain. It tells you a lot about the design philosophy behind fossil and is valuable because of that. It is not an honest comparison though.

                                                1. 11

                                                  Honestly? I don’t actually care which SCM system I use, as long as it talks to GitHub, because that’s where all my code and all my employer’s code lives.

                                                  Furthermore, I use maybe three or four Git commands 99.9% of the time and I have them memorized (and aliased!), so why would I ever switch?

                                                  People complain about Git’s UI and I wonder what it is they’re doing that I don’t do that makes it such a big deal for them. I’m not saying they’re wrong, but I kind of suspect that most devs are more like me, they treat SCM as a black box utility, like the CPU or the memory, and that’s why we’re moving toward a monoculture, because it just doesn’t matter to most people.

                                                  1. 11

                                                    Not a pro-Git, but when I considered switching to Fossil there was no (easy) equivalent of git rebase -i.

                                                    That was probably my biggest complaint.

                                                    1. 4

                                                      The article explains that this is a deliberate design choice.

                                                      1. 22

                                                        That choice is a total dealbreaker for me. I never use SCM to record development history as it happened, with all the stumbles, experiments, dead-ends, and “oops, forgot a semicolon in the previous commit”. I use it to produce sets of changes that are easy to review and revert if necessary.

                                                        1. 11

                                                          I agree, except that I absolutely do use local SCM to record all of that nonsense history, so I have the freedom to try different approaches and make mistakes without losing work. Obviously I don’t push all that crap; I clean it up with rebase -i before letting anyone else see it. It baffles me how this could be considered bad practice at all, much less “history written by the victors”.

                                                          (Edit: I wonder if people are just talking past each other here — perhaps the anti-rebasers think you would rebase after pushing, which would be very rude — and the benefits of rebasing locally while working just don’t occur to them.)
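
                                                          For the curious, that local-cleanup flow looks roughly like this (branch, file, and message names invented; `git reset --soft` stands in for the interactive `rebase -i` squash so the example runs unattended):

                                                          ```shell
                                                          # throwaway repo purely for illustration
                                                          git init -q -b main demo && cd demo
                                                          git config user.email me@example.com && git config user.name me
                                                          git commit -q --allow-empty -m "init"

                                                          # work messily on a private branch
                                                          git switch -q -c wip-feature
                                                          echo "draft" > feature.txt
                                                          git add feature.txt && git commit -qm "wip"
                                                          echo "fixed" > feature.txt
                                                          git commit -qam "oops, forgot a semicolon"

                                                          # collapse the noise into one reviewable commit before anyone
                                                          # sees it; interactively you would run `git rebase -i main`
                                                          # and mark the extra commits as squash/fixup
                                                          git reset --soft main
                                                          git commit -qm "add feature"

                                                          git rev-list --count main..wip-feature  # one clean commit to push
                                                          ```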

                                                          1. 5

                                                            I’ve used fossil in the past and my workflow was to have two trees: one with a “history as it really happened” and one that I rsync the completed work to and commit for a “clean” history. The latter gets shared and later rsynced over the other messy repo since I didn’t care about how I got to the end state.

                                                            1. 3

                                                              I did something like that in the olden SVN days. cp -R . ../asdfaasdf was my git stash.

                                                          2. 8

                                                            I liked every part of the post except the rebase commentary. Everywhere else it does a great job of comparing fossil to git in terms of design motivation vis-à-vis team size. But for rebasing the criticism waxes philosophical without acknowledging different perspectives.

                                                            One commentator characterized Git as recording history according to the victors, whereas Fossil records history as it actually happened.

                                                            To me, a rebasing workflow is analogous to compare-and-swap operations, making development lock-free. I make a change, I try to check in the change, if that fails I back off and retry. The history of my retries is no more permanent than the history of transaction rollbacks in a SQL database.

                                                            I can see why that history might be useful for small teams, who observe the progress of each other’s branches. But right now at work I have 15 branches on deck, blocked on other teams or on hold while I work on something higher priority. My team doesn’t care about any of those branches until I send them for code review, and they don’t care about any versions of my change except the last one.

                                                            Depending on the time of day, dozens to hundreds of mainline merges are submitted per second. No one cares about the fully accurate history of every developer’s workflow, the deluge of finalized commits is plenty.

                                                            Now with all that said, I am 100% convinced by this post to use fossil for personal projects. I have notes I’d love to publish as blog posts, but no motivation to operate a web app, build a static site, or use some hosted solution. And I often do wish I could go back and see my thought process for personal code that I pick up again after months, sometimes years.

                                                        2. 9

                                                          This recent lobste.rs comment by the “game of trees” author illustrates some downsides, mostly related to scaling.

                                                        1. 7

                                                          Somehow OpenBSD, having a fraction of Linux’s manpower, supports quite a few platforms.

                                                          Source: https://www.openbsd.org/plat.html

                                                          1. 3

                                                            All of them self-hosting too.

                                                          1. 1

                                                            At my job we switched from Jenkins to TeamCity. According to a colleague, TeamCity is better, but it’s not that great either. The UI is so heavy (with JavaScript?) that my old notebook stopped rendering it properly after half an hour or so. We also had problems connecting it to the company’s DC/LDAP.

                                                            I’m not familiar with other off-the-shelf solutions, but if I’d need one I’d probably just use a cronjob (or git-hook) that

                                                            • pulls the repository,
                                                            • makes it
                                                            • and dumps the appropriate files into /var/www/
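
                                                            As a sketch, such a script could look like this (the REPO/DEST paths and the assumption that the build drops its output into $REPO/public are mine, not a recommendation):

                                                            ```shell
                                                            #!/bin/sh
                                                            set -eu

                                                            # Illustrative paths; override via environment if needed.
                                                            REPO=${REPO:-/srv/build/mysite}
                                                            DEST=${DEST:-/var/www/mysite}

                                                            git -C "$REPO" pull --ff-only   # pull the repository
                                                            make -C "$REPO"                 # make it
                                                            cp -R "$REPO"/public/. "$DEST"/ # dump the files into the web root
                                                            ```

                                                            Wire it up with a crontab entry (say, every five minutes) or call it from a post-receive hook.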
                                                            1. 2

                                                              Off-load functionality to native Vim features or generic plugins when they offer a good user experience. Implement as little as reasonable.

                                                              Good choice.

                                                              I gave up on vim-go mostly because it kept accumulating features I didn’t care about.

                                                              1. 0

                                                                 I tried out mutt a few weeks ago. It was blazing fast and I was excited to be able to use powerful macros. However, there are many emails I get that I want to be in HTML: my company newsletter and film newsletters, to be exact. The escape hatch from text is not easy to use in mutt: it requires setting up a trigger to save the email to a file and open it in a browser. We have to acknowledge reality and see that people do want to receive some emails in HTML.

                                                                1. 4

                                                                  I use mutt and here’s what I do: I have a .mailcap file with the following entries:

                                                                  text/html;/usr/bin/lynx -force-html %s
                                                                  text/*	copiousoutput

                                                                  Then within mutt, if I’m reading an email I know is in HTML, I hit ‘v’, then select the HTML section and it launches lynx with the HTML portion. All other text types are handled by mutt directly.

                                                                  1. 1

                                                                    This does help with HTML-formatted mostly-text emails. But my point is that many emails I want to read have images in them and I want the full fidelity of a browser to read them.

                                                                    I had previously tried this mailcap entry:

                                                                    text/html; /usr/bin/google-chrome '%s'; test=test -n "$DISPLAY";

                                                                    1. 1

                                                                      I have the same problem as you: I subscribe to a bunch of newsletters about movies and comics and it’s kind of the point to receive an HTML email that can show you the pictures. I also tried mutt and was almost happy with it until I ran against the arcane knowledge of mailcap files and my inability to make a config that (1) works and (2) is cross-platform (since I’m using an obligatory dotfiles repository across my machines). I’d looooove for someone to write a tutorial for that kind of stuff.

                                                                    2. 1

                                                                      Instead of viewing the HTML in a browser you can dump it into the pager:

                                                                      In muttrc:

                                                                      auto_view text/html
                                                                      alternative_order text/plain text/html

                                                                      In mailcap:

                                                                      text/html; links -assume-codepage utf8 -dump %s; copiousoutput

                                                                  1. 2

                                                                    I don’t see the point of these wrappers. I have a git repo and there is an install.sh script in there.

                                                                    1. 1

                                                                      I used to have an mkfile that served the same purpose as your script. Now I’m just using stow[0].

                                                                      Here’s an example for maintaining an email setup (assuming it consists of mutt and mbsync):

                                                                      $ tree ~/dotfiles/mail
                                                                      |-- .muttrc
                                                                      |-- .mbsyncrc

                                                                      Then you can deploy it with

                                                                      cd ~/dotfiles
                                                                      stow mail

                                                                      [0] https://www.gnu.org/software/stow/