1.  

    This is a problem with any language or library. You need to know what is available in the Python library and what it does to use it effectively. You need to know the binaries in /bin to use the shell effectively. And so on.

    It’s just like learning a human language: Until you use the vocabulary enough to get comfortable, you are going to feel lost, and spend a lot of time getting friendly with a dictionary.

    1.  

The difference is that Python’s vocabulary is already organized by the standard library, and it has cookbooks, and it doesn’t involve holistically thinking of the entire transformation at once. So it intrinsically has the problem to a lesser degree than APLs do, and it has taken steps to mitigate it as well.

      1.  

        This is a problem with any language or library. You need to know what is available in the Python library and what it does to use it effectively. You need to know the binaries in /bin to use the shell effectively. And so on.

I think this probably misses the point. The Python solution was able to compose a couple of very general, elementary problem-solving mechanisms (iteration, comparison), of which Python has a very limited vocabulary (there’s maybe a half dozen control constructs, total?), to quickly arrive at a solution (albeit a limited, non-parallel solution, but one that’s intuitive and perhaps 8 times out of 10 does the job). The standard library might offer an implementation already, but you could get a working solution without crawling through the docs (and you could probably guess the name anyways).
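To make that concrete (the original task isn’t quoted here, so this uses a stand-in problem, finding the longest run of equal adjacent elements): a naive loop built from nothing but iteration and comparison, next to the stdlib spelling you might only find later in `itertools`:

```python
from itertools import groupby

def longest_run_naive(xs):
    """Longest run of equal adjacent elements, using only a loop and comparison."""
    best = cur = 0
    prev = object()  # sentinel that compares unequal to everything in xs
    for x in xs:
        cur = cur + 1 if x == prev else 1
        best = max(best, cur)
        prev = x
    return best

def longest_run_stdlib(xs):
    """Same result via itertools.groupby, once you know it exists."""
    return max((sum(1 for _ in g) for _, g in groupby(xs)), default=0)
```

Both arrive at the same answer; the point is that the loop version requires no vocabulary beyond the core language.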

J required, owing to its overwhelming emphasis on efficient whole-array transformation, selection from a much, much larger and far more specialized set of often esoteric constructs/transformations, all of which have unguessable symbolic representations. The documentation offers little to aid this search, complicating a task that was already quite a bit less intuitive than Python’s naive iteration approach.

      1. 5

        Wow, I hadn’t thought of the huge change automotive is facing. This article makes the point that current fleets sit unused the majority of the time, but a properly implemented self-driving fleet will be in use nearly 100% of the time. That’s a big change in the needs of the parts.

        I live in a cold climate and parts for automotive need to withstand -40 (both C and F, they’re the same). I wonder if we’ll see vehicles that aren’t designed for this because they’ll be in use all the time and won’t sit still long enough to get that cold. Of course now that I think of it, this would not really adversely affect the ICs (topic of this submission), more the mechanical and specialized parts (LCD displays, etc).

        1. 4

While the cold wouldn’t adversely affect the ICs in the long term, if they get cold enough, they’ll temporarily stop working as the electron occupancy of the valence bands drops. Practically, this means that your car computer will fail to boot if it gets too cold.

Which, I guess, is a long-winded way of saying that the design constraints don’t change much for the electronics.

          1. 3

At sufficiently high doping, it becomes basically impossible to freeze out the dopants, because a Mott band forms. Essentially, the dopant orbitals begin to overlap, forming a partially filled conduction band.

            CMOS chips run just fine at 77K.

        1. 2

          Not getting accepted to the Google SoC? Sounds like they’re pretty…. salty.

          1. 3

You could even say there’s something a bit fishy about this whole response!

More seriously, Summer of Code was always a long shot. Tiny org, tiny userbase, no alignment with Google’s long-term vision, no connections with the selection committee. There’s no surprise that we were rejected, and if I were running Summer of Code I’d probably have rejected us too.

            This… uh.. program.. actually started off as a typo – but I was amused by the idea of people getting cod, so I decided to go with it. It’s just a joke carried through to ridiculousness, and there’s really no bigger commentary in it.

            I just thought it would be funny to be the kind of project that ships fish to people as a ‘thank you’. Getting rejected from Summer of Code was an excuse.

          1. 3

            Definitely will consider it!

            1. 2

              We’d love to have you!

            1. 22

This article is great except for No. 3: learning how hardware works. C will teach you how PDP-11 hardware works, with some extensions, but not modern hardware. They have different models. The article then mentions computer architecture and assembly are things they teach students. Those plus online articles with examples on specific topics will teach the hardware. So, they’re already doing the right thing even if maybe saying the wrong thing in No. 3.

Maybe one other modification. There are quite a lot of tools, esp reimplementations or clones, written in non-C languages. The trend started getting big with Java and .NET, with things like Rust and Go making some more waves. There’s also a tendency to write things in themselves. I bring it up because even the Python example isn’t true if you use a Python written in Python, recent interpreter tutorials in the Go language, or something like that. You can benefit from understanding the implementation language and/or debugger of whatever you’re using in some situations. That’s not always C, though.

              1. 14

Agreed. I’ll add that even C’s status as a lingua franca is largely due to the omnipresence of unix, unix-derived, and posix-influenced operating systems. That is, understanding C is still necessary to, for example, link non-Ruby extensions to Ruby code. That wouldn’t be the case if VMS, or Lisp machines, had ended up dominant.

In that way, C is important to study for historical context. Personally, I’d try to find a series of exercises to demonstrate how different current computer architecture is from what C assumes, and use that as a jumping-off point to discuss how relevant C’s semantic model is today, and what tradeoffs were made. That could spin out either to designing a language which maps to today’s hardware more completely and correctly, or to discussions of modern optimizing compilers and how far abstracted a language can become and still compile to efficient code.

                A final note: no language “helps you think like a computer”. Our rich history shows that we teach computers how to think, and there’s remarkable flexibility there. Even at the low levels of memory, we’ve seen binary, ternary, binary-coded-decimal, and I’m sure other approaches, all within the first couple decades of computers’ existence. Phrasing it as the original author did implies a limited understanding of what computers can do.

                1. 8

                  C will teach you how PDP-11 hardware works with some extensions, but not modern hardware. They have different models.

I keep hearing this meme, but PDP-11 hardware is similar enough to modern hardware in every way that C exposes, except, arguably, for NUMA and inter-processor effects.

                  1. 10

                    You just countered it yourself even with that given prevalence of multicores and multiprocessors. Then there’s cache hierarchies, SIMD, maybe alignment differences (memory is fuzzy), effects of security features, and so on.

They’d be better off just reading about modern computer hardware and ways of using it properly.

                    1. 6

Given that none of these are represented directly in assembly, would you also say that the assembly model is a poor fit for modeling modern hardware?

                      I mean, it’s a good argument to make, but the attempts to make assembly model the hardware more closely seem to be vaporware so far.

                      1. 6

Hmm. They’re represented more directly than with C, given there’s no translation to be done to the ISA. Some, like SIMD, atomics, etc., will be actual instructions on specific architectures. So, I’d say learning hardware and ASM is still better than learning C if you want to know what the resulting ASM is doing on that hardware. I’m leaning toward yes.

There is some discrepancy between assembly and hardware on highly complex architectures, though. RISCs and microcontrollers will have less of it.

                    2. 1

                      Not helped by the C/Unix paradigm switching us from “feature-rich interconnected systems” like in the 1960s to “fast, dumb, and cheap” CPUs of today.

                    3. 2

                      I really don’t see how C is supposed to teach me how PDP-11 hardware works. C is my primary programming language and I have nearly no knowledge about PDP-11, so I don’t see what you mean. The way I see it is that the C standard is just a contract between language implementors and language users; it has no assumptions about the hardware. The C abstract machine is sufficiently abstract to implement it as a software-level interpreter.

                      1. 1

As in this video of its history, the C language was designed specifically for the hardware it ran on, due to that hardware’s extremely limited resources. It was based heavily on BCPL, which invented the “programmer is in control” ethos and was essentially the set of ALGOL features that could compile on another limited machine, the EDSAC. Even being byte-oriented versus word-oriented was due to the PDP-7 being byte-oriented versus the word-oriented EDSAC. After a lot of software was written in it, two things happened:

(a) Specific hardware implementations tried to be compatible with it in stack or memory models so that programs written for C’s abstract machine would go fast. Although possibly good for PDP-11-style hardware, this compatibility would mean many missed opportunities for both safety/security and optimization as hardware improved. These things, though, are what you might learn about hardware by studying C.

(b) Hardware vendors competing with each other on performance, concurrency, energy usage, and security both extended their architectures and made them more heterogeneous than before. The C model didn’t just diverge from these: new languages were invented (esp. in HPC) so programmers could easily use them via something that gives a mental model closer to what the hardware does. The default was hand-coded assembly that got called in C or Fortran apps, though. Yes, HPC often used Fortran, since its model gave better performance than C’s on numerical applications even on hardware designed for C’s abstract machine. Even though it was easy on the hardware, the C model introduced too much uncertainty about programmers’ intent for compilers to optimize those routines.

                        For this reason, it’s better to just study hardware to learn hardware. Plus, the various languages either designed for max use of that hardware or that the hardware itself is designed for. C language is an option for the latter.

“it has no assumptions about the hardware”

It assumes the hardware will give people direct control over pointers and memory in ways that can break programs. Recent work tries to fix the damage that came from keeping the PDP-11 model all this time. There were also languages that handled pointers and memory safely by default unless told otherwise, using overflow or bounds checks. SPARK eliminated those checks for most of its code, with the compiler substituting pointers in where it’s safe to do so. It’s also harder in general to make C programs enforce POLA with hardware or OS mechanisms versus a language with that generated for you or having true macros to hide the boilerplate.

“The C abstract machine is sufficiently abstract to implement it as a software-level interpreter.”

                        You can implement any piece of hardware as a software-level interpreter. It’s just slower. Simulation is also a standard part of hardware development. I don’t think whether it can be interpreted matters. Question is: how much does it match what people are doing with hardware vs just studying hardware, assembly for that hardware, or other languages designed for that hardware?

                        1. 3

I admit that the history of C, and also the history of implementations of C, do give some insight into computers and how they’ve evolved into what we have now. I do agree that hardware, operating systems and the language have all been evolving at the same time and have made an impact on each other. That’s not what I’m disagreeing with.

                          I don’t see a hint of proof that knowledge about the C programming language (as defined by its current standard) gives you any knowledge about any kind of hardware. In other words, I don’t believe you can learn anything practical about hardware just from learning C.

To extend what I’ve already said, the C abstract machine is sufficiently abstract to implement it as a software interpreter, and it matters since it proves that C draws clear boundaries between expected behavior and implementation details, which include how a certain piece of hardware might behave. It does impose constraints on all compliant implementations, but that tells you nothing about what “runs under the hood” when you run things on your computer; an implementation might be a typical, bare-bones PC, or a simulated piece of hardware, or a human brain. So the fact that one can simulate hardware is not relevant to the fact that you still can’t draw practical assumptions about its behavior just from knowing C. The C abstract machine is neither hardware nor software.

                          Question is: how much does it match what people are doing with hardware vs just studying hardware, assembly for that hardware, or other languages designed for that hardware?

                          What people do with hardware is directly related to knowledge about that particular piece of hardware, the language implementation they’re using, and so on. That doesn’t prove that C helps you understand that or any other piece of hardware. For example, people do study assembly generated by their gcc running on Linux to think about what their Intel CPU will do, but that kind of knowledge doesn’t come from knowing C - it comes from observing and analyzing behavior of that particular implementation directly and behavior of that particular piece of hardware indirectly (since modern compilers have to have knowledge about it, to some extent). The most you can do is try and determine whether the generated code is in accordance with the chosen standard.

                          1. 1

In that case, it seems we mostly agree about its connection to learning hardware. Thanks for elaborating.

                    1. 12

                      Downloading and installing (even signed) packages over unencrypted channels also allows an attacker with the ability to inspect traffic to be able to take an inventory of the installed software on the system. An attacker could use that to his/her advantage by knowing which software, and its vulnerabilities, is installed. The attacker then has the exact binary and can replicate the entire system, tailoring exploits to the inventory on the target system.

                      1. 21

                        They cover this in the linked page; they claim there’s such a small number of packages that merely knowing the length of the ciphertext (which, of course, HTTPS can’t hide) is enough to reliably determine which package is being transmitted.
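As a sketch of why length alone leaks so much (the package names and sizes below are made up; real repository metadata is public, so an observer can build the same table):

```python
# Hypothetical repository metadata: package name -> download size in bytes.
# A passive observer only sees the transfer length, plus some TLS overhead.
PACKAGE_SIZES = {
    "vim": 1_543_208,
    "git": 6_871_332,
    "curl": 412_990,
}

def guess_package(observed_bytes, overhead=4096):
    """Return packages whose size is within `overhead` bytes of the
    observed transfer; with few plausible packages, the match is
    usually unique."""
    return [name for name, size in PACKAGE_SIZES.items()
            if abs(observed_bytes - size) <= overhead]
```

With only a handful of plausible sizes, a single observed transfer length usually narrows the download to one package, TLS or not.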

Perhaps doing it over HTTP/2, so you get both encryption and pipelining, would get you sufficient obfuscation, but HTTPS alone doesn’t.

                        1. 2

I’m not sure how HTTP/2 helps. You can still generally take a look at traffic bursts and get an idea of how much was transferred. You’d have to generate blinding packets to hide the actual amount of traffic being transferred, effectively padding out downloads to the size of the largest packages.

                          1. 2

But figuring out which packages were downloaded would require solving the knapsack problem, right? Instead of getting N requests of length k_i, you get one request of length \sum k_i. Although, now that I think about it, the number of packages that you download at once is probably small enough for it to be tractable for a motivated attacker.

                            Padding is an interesting possibility but I think some of the texlive downloads are >1GB; that’s a pretty rough price to pay to download vi or whatever.
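The knapsack-style attack described above can be sketched directly (hypothetical package sizes; a real attacker would use the repository’s published metadata):

```python
from itertools import combinations

# Hypothetical package sizes in bytes; real metadata is public.
SIZES = {"vim": 1_543_208, "git": 6_871_332, "curl": 412_990, "tmux": 505_334}

def candidate_bundles(total, max_packages=3, slack=4096):
    """Brute-force subset sum: which sets of packages could add up to
    the observed total transfer size? Exponential in general, but
    trivial for the few packages in a typical install transaction."""
    names = list(SIZES)
    hits = []
    for r in range(1, max_packages + 1):
        for combo in combinations(names, r):
            if abs(sum(SIZES[n] for n in combo) - total) <= slack:
                hits.append(set(combo))
    return hits
```

For the handful of packages in a typical install transaction, brute force over small subsets is cheap, which supports the worry that coalescing downloads alone isn’t enough without padding.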

                          2. 1

                            True. Given that each package download is its own connection, it wouldn’t be too difficult for an attacker to deduce which package is being downloaded given the size of the transmitted encrypted data. The attacker would need to keep a full mirror of the package repo (disk space is cheap, so plausible). I wonder if the same would apply to package repos served over Tor Onion Services.

                        1. 2

                          Perhaps it would be more useful to ask people not to derail technical posts with meta-discussion about communication style and behavior. It’s a regular occurrence in this community, and not restricted to mailing list threads.

                          1. 8

                            Many of these submitted mailing list threads aren’t really submitted for their technical content in the first place, though— they’re explicitly submitted because they were a flamewar and people like to gawk at flamewars, so that’s kind of on-topic to discuss imo. The only particularly interesting thing about the recent Torvalds submission, for example, is the flaming. Presumably that’s why the submitter chose to include an all-caps quote, “COMPLETE AND UTTER GARBAGE” in the submission title, rather than highlighting any technical content. I’m going to go out on a limb and predict that if it had a technical title instead of a flamewar title, it wouldn’t have gotten the attention here that it did. (The little technical content the linked post has turns out further down the thread to not even be correct.)

                            At the very least, when people are linking gawk-at-the-flamewar type mailing list posts, can I suggest tagging them with the rant tag?

                            1. 3

                              The only particularly interesting thing about the recent Torvalds submission, for example, is the flaming.

He accuses Intel of planning not to fix the Spectre bug, as in they want to provide a workaround that’s off by default, since it would impact their performance metrics, shifting the responsibility to OS vendors. That’s far more interesting than flaming and worth the submission in itself.

                              So the IBRS garbage implies that Intel is not planning on doing the right thing for the indirect branch speculation.

                              It’s not “weird” at all. It’s very much part of the whole “this is complete garbage” issue.

                              The whole IBRS_ALL feature to me very clearly says “Intel is not serious about this, we’ll have a ugly hack that will be so expensive that we don’t want to enable it by default, because that would look bad in benchmarks”.

                              So instead they try to push the garbage down to us. And they are doing it entirely wrong, even from a technical standpoint.

                              source: http://lkml.iu.edu/hypermail/linux/kernel/1801.2/04628.html

                              1. 5

                                http://lkml.iu.edu/hypermail/linux/kernel/1801.2/04630.html http://lkml.iu.edu/hypermail/linux/kernel/1801.2/04637.html

                                The next 2 emails show that Linus has misread the patch.

                                You’re looking at IBRS usage, not IBPB. They are different things.

                                Yes, the one you’re looking at really is trying to protect the kernel, and you’re right that it’s largely redundant with retpoline. (Assuming we can live with the implications on Skylake, as I said.)

                                (I pointed that out in the lobste.rs thread, and that’s kind of the thing I was annoyed about)

                                1. 3

                                  FWIW, if you look at the second email you linked…

                                  Ehh. Odd intel naming detail.
                                  If you look at this series, it very much does that kernel entry/exit stuff. It was patch 10/10, iirc. In fact, the patch I was replying to was explicitly setting that garbage up.
                                  And I really don’t want to see these garbage patches just mindlessly sent around.

                                  Linus seems to be claiming that he didn’t misread the patch.

                          1. 4

                            As far as I can tell, the speedup with learned indexes had little to do with the learning, and a good deal to do with the ease of implementing them in parallel. With massive parallelism comes the ability to speed up the algorithm by using GPUs.

Classical data structures with parallel lookups should work just as well as (if not far better than) learned indexes in this case.

                            1. 6

                              The execution time of 50-ish nanoseconds is already in the range where memory latency may be dominant. If the mythical TPUs that execute a sufficiently large NN in a single cycle were glued on CPUs, perhaps the learned indexes could shave off a few cycles. AIUI using GPUs as they are now would make little sense because of massive latency.

                              Now could we have useful hash functions that execute in a single cycle?

                              1. 3

                                There are already many fast hash functions (clhash and falkhash) some of which take advantage of specialized hardware instructions.

                                It doesn’t seem worth it to add dedicated instructions when there are existing algorithms that hash at a fraction of a cycle per byte.

                                1. 2

The things I’ve seen before are usually measured in cycles per byte, with a low number of cycles. So, what do you mean by execute in a single cycle? Are you saying you want a byte per cycle, a specific number of bytes done in a cycle on a CPU, or that on a parallelized machine like a GPU? And how good does that hash function have to be in terms of collisions? Can it be like the textbook examples people use in day-to-day programming, or as unique as cryptographic kinds provide?

                                  1. 2

                                    I wasn’t looking for an actual answer to that. The particular numbers selected for the examples are somewhat arbitrary.

                                    My line of thought was that if we’re doing hardware acceleration, then maybe instead of throwing a big neural net and a 120 teraflops TPU at the problem [and be still restrained by memory latency], we could use cuckoo hashes (or whatever) and accelerate some other link in the chain with far less hardware. A fast and secure hardware-accelerated hash function could actually prove useful, but that’s just one possibility.

                                    A fancy version could take a base address and a key and compute some number of hashes, fetch all the resulting addresses from memory, and then compare all the keys in parallel. Call it crazy for a CPU instruction. Such a massive TPU is pretty crazy too, and I’m not yet 100% convinced we’re going to get them on the CPU die :-)
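A software sketch of that idea, using a two-way cuckoo table (toy code, not a hardware proposal: the second hash and the eviction bound are arbitrary choices):

```python
# Sketch of a two-way cuckoo hash table: every key has exactly two
# candidate slots, both computable before touching memory.
class CuckooTable:
    def __init__(self, nslots=8):
        self.n = nslots
        self.slots = [None] * nslots  # each slot: (key, value) or None

    def _slots_for(self, key):
        h1 = hash(key) % self.n
        h2 = hash((key, 0x9E3779B9)) % self.n  # second, independent-ish hash
        return h1, h2

    def insert(self, key, value, max_kicks=32):
        item = (key, value)
        for _ in range(max_kicks):
            i1, i2 = self._slots_for(item[0])
            for i in (i1, i2):
                if self.slots[i] is None or self.slots[i][0] == item[0]:
                    self.slots[i] = item
                    return
            # both slots taken: evict the occupant of the first slot
            # and try to re-place it on the next iteration
            item, self.slots[i1] = self.slots[i1], item
        raise RuntimeError("table too full; a real implementation would rehash")

    def get(self, key):
        # Lookup probes at most two slots, and both addresses are
        # known up front.
        for i in self._slots_for(key):
            entry = self.slots[i]
            if entry is not None and entry[0] == key:
                return entry[1]
        return None
```

The two candidate slots are known before touching memory, so both fetches can be issued in parallel; that independence is what a hardware version would exploit.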

                              1. 7

                                There’s two sets of issues:

                                1. The job requirements listed in most job postings are technologies, even though that isn’t really what the company is looking for.

                                2. Most resumes omit huge amounts of relevant information, and again often overfocus on technologies.

To become a senior developer you need some technical skills, but also to be able to work independently. That means being able to scope a project, know when to ask for help, prioritize, learn new technologies on your own, etc. Almost no one puts this in their job postings because they can’t quite articulate it; instead they put years of experience or a random list of technologies they use, conflating “knows this technology” with “will get started on this quickly/can operate independently”. Not the same thing at all.

                                On the resume side, often it’s “I did a thing!”. You also need to give context, why this thing was needed, and outcomes, why this thing was useful. And also there’s some stylistic stuff like, yes, no one reads the resume, they skim it at best.

                                So you need to make really sure it shouts INDEPENDENT WORKER YES I CAN WORK INDEPENDENTLY at the top, and it’s not buried 3/4 down the bottom of the page as an implied side-effect of a project scope that isn’t actually clear because the person interviewing doesn’t work at your company and so has no idea how impressive the thing you did actually was.

                                If you can share:

                                1. The skills you’ve seen listed, where you’ve said “I can do this!”.
                                2. Your resume.

then I can probably give specific advice.

                                (I write a little bit about this here - https://codewithoutrules.com/2017/07/16/which-programming-skills-are-in-demand/ - but I should probably do a “here’s how you write a resume” blog post, since I have Opinions.)

                                1. 2

Yeah, I subscribe to your website, really enjoy the articles :) I’ll PM you my resume info. I was hoping to learn others’ experiences in this thread, not ask for unsolicited career advice.

                                  1. 1

                                    BTW, from your blog post:

                                    Learn the problem solving skills that employers will always need. That means gathering requirements, coming up with efficient solutions, project management, and so on.

                                    Learning these skills is one thing. Demonstrating that you’ve learned them is another. Hiring managers don’t just want to see “project management” listed on your resume, they want to be sure that you can actually perform those skills (after all, their hiring decision is a multi-thousand-dollar bet on you, they want to be sure that their bet pays off). Could you speak to some techniques one could use to demonstrate these skills?

                                    1. 2

I’ll try to write some more when I have time, but here’s a quick example on the resume level. Let’s say you’re applying to a job where you don’t know the technology stack. And sadly it’s a cold application, with no one to introduce you. You’re sure you can do the job, but you need to convince them. So:

                                      • You probably want to say “I can learn new technologies quickly” in the first paragraph or two of the resume, because otherwise they might just skim your list-o-technologies, miss the thing they think they need, and drop your resume in the trash.
                                      • You also want to give a concrete example.
                                      • You want to demonstrate that the skill had real business value.

So e.g. you can have an opening paragraph or bullet list at the top of the front page that has a bit saying “I can learn new technologies quickly, as I did at a recent project where I fixed a production-critical bug on day one, even though I hadn’t worked in Java before.”

                                      1. 2

As someone who is at a startup that’s currently hiring: when I’m skimming a resume, I’m looking for experience, not skill lists.

                                        You can say “Project management” on your resume. But it’s far better to say “Was project manager for project x, successfully handling cross-team portions y and z. Project x shipped early and under budget”, and show me that you’ve managed projects.

                                        If you want to convince me that you’re a senior developer, your resume should reflect that you’ve been doing the kind of work that people expect from a senior developer. Leading projects, mentoring more junior colleagues, shipping major features, etc.

                                    1. 14

                                      “At a scale of billions of users, this has the effect of further reinforcing Google’s dominance of the Web.”

                                      You say this like it was an accident.

                                      1. 4

                                        First, to call itself a process could [simply] execute /proc/self/exe, which is a in-memory representation of the process.

                                        There’s no such representation available as a file. /proc/self/exe is just a symlink to the executable that was used to create the process.

                                        Because of that, it’s OK to overwrite the command’s arguments, including os.Args[0]. No harm will be made, as the executable is not read from the disk.

                                        You can always call a process with whatever args[0] you like. No harm would be done.
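This is easy to demonstrate (Linux-specific, since it reads /proc/self/cmdline; the fake name is arbitrary):

```python
import subprocess
import sys

# argv[0] is whatever the caller passes: exec-family calls take the
# executable path and the argument vector separately, so the first
# argument need not match the program actually being run.
child = subprocess.run(
    ["totally-not-python", "-c",
     "print(open('/proc/self/cmdline', 'rb').read().split(b'\\x00')[0].decode())"],
    executable=sys.executable,  # the program really executed
    capture_output=True, text=True,
)
print(child.stdout.strip())  # the child sees argv[0] == "totally-not-python"
```

The kernel executes the program named by the separate executable path; argv[0] is purely advisory.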

                                        1. 4

                                          Although /proc/self/exe looks like a symbolic link, it behaves differently if you open it. It’s actually more like a hard link to the original file. You can rename or delete the original file, and still open it via /proc/self/exe.
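A quick Linux-only demonstration of that behavior, using a disposable copy of /bin/sleep so nothing real gets deleted:

```python
import os
import shutil
import subprocess
import tempfile

# Although /proc/<pid>/exe looks like a symlink, it keeps working after
# the backing file is deleted, which ordinary symlink semantics would
# not allow. Linux-specific.
with tempfile.TemporaryDirectory() as d:
    copy = os.path.join(d, "sleep-copy")
    shutil.copy2("/bin/sleep", copy)        # disposable copy of /bin/sleep
    child = subprocess.Popen([copy, "30"])  # keep the process alive briefly
    os.unlink(copy)                         # the file is now gone on disk
    link = os.readlink(f"/proc/{child.pid}/exe")           # "... (deleted)"
    header = open(f"/proc/{child.pid}/exe", "rb").read(4)  # still opens fine
    child.kill()
    child.wait()
print(link.endswith(" (deleted)"), header == b"\x7fELF")
```

Even with the file unlinked, readlink reports the old path with “ (deleted)” appended and the open still succeeds, which is hard-link-like behavior.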

                                          1. -4

                                            No harm will be made, as the executable is not read from the disk.

                                            the executable is definitely read from the disk

                                            Again, this was only possible because we are executing /proc/self/exe instead of loading the executable from disk again.

                                            no

                                            The kernel already has open file descriptors for all running processes, so the child process will be based on the in-memory representation of the parent.

                                            no that’s not how it works, and file descriptors aren’t magic objects that cache all the data in memory

                                            The executable could even be removed from the disk and the child would still be executed.

                                            that’s because it won’t actually be removed if it’s still used, not because there’s a copy in memory

                                            <3 systems engineering blog posts written by people who didn’t take unix101

                                            1. 12

                                              Instead of telling people they are idiots, please use this opportunity to correct the mistakes that the others made. It’ll make you feel good, and not make the others feel bad. Let’s prop up everyone, and not just sit there flexing our muscles.

                                              1. 3

                                                Sorry for disappointing you :)

                                                I got that (wrongly) from a code comment in Moby (please check my comment above) and didn’t check the facts.

                                                1. 2

                                                  I’m not saying that the OP was correct, I’m just saying that:

                                                  /proc/self/exe is just a symlink to the executable

                                                  … is also not completely correct.

                                              2. 3

                                                Thanks for pointing out my mistakes! I just fixed the text.

                                                I made some bad assumptions when I read this comment [1] from Docker and failed to validate it. Sorry.

                                                By the way, is it just my bad English or that comment is actually wrong as well?

                                                [1] https://github.com/moby/moby/blob/48c3df015d3118b92032b7bdbf105b5e7617720d/pkg/reexec/command_linux.go#L18

                                                1. 1

                                                  that comment is actually wrong as well?

                                                  I don’t think it’s strictly correct, but for the purpose of the code in question it is accurate. That is, /proc/self/exe points to the executable file that was used to launch “this” process - even if it has moved or been deleted - and this most likely matches the “in memory” image of the program executable; but I don’t believe that’s guaranteed.

                                                  If you want to test and make sure, try a program which opens its own executable for writing and trashes the contents, and then execute /proc/self/exe. I’m pretty sure you’ll find it crashes.

                                                  1. 3

                                                    but I don’t believe that’s guaranteed.

                                                    I think it’s guaranteed on local file systems as a consequence of other behavior. I don’t think you can open a file for writing when it’s executing – you should get ETXTBSY when you try to do that. That means that as long as you’re pointing at the original binary, nobody has modified it.

                                                    I don’t think that holds on NFS, though.
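A quick way to check the ETXTBSY behavior is a sketch like this (Linux, local filesystem; the guarantee doesn't necessarily hold on NFS):

```go
// etxtbsy.go — Linux only, local filesystem.
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Find our own (currently executing) binary.
	exe, err := os.Readlink("/proc/self/exe")
	if err != nil {
		panic(err)
	}

	// The kernel refuses to open a running executable for writing
	// with ETXTBSY ("text file busy").
	_, err = os.OpenFile(exe, os.O_WRONLY, 0)
	if errors.Is(err, syscall.ETXTBSY) {
		fmt.Println("got ETXTBSY: cannot modify a running executable")
	} else {
		fmt.Println("unexpected result:", err)
	}
}
```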

                                                    1. 1

                                                      If you want to test and make sure, try a program which opens its own executable for writing and trashes the contents, and then execute /proc/self/exe. I’m pretty sure you’ll find it crashes

                                                      Actually, scratch that. You won’t be able to write to the executable since you’ll get ETXTBSY when you try to open it. So, for pretty much all intents and purposes, the comment is correct.

                                                      1. 1

                                                        Interesting. Thank you for your insights.

                                                        In order to satisfy my curiosity, I created this small program [1] that calls /proc/self/exe infinitely and prints the result of readlink.

                                                        When I run the program and then delete its binary (i.e., the binary that /proc/self/exe points to), the program keeps successfully calling itself. The only difference is that now /proc/self/exe points to /my/path/proc (deleted).

                                                        [1] https://gist.github.com/bertinatto/5769867b5e838a773b38e57d2fd5ce13

                                                  1. 1

                                                    Last I looked, Chromium ships with 720 megabytes of third-party source, including ffmpeg, libc++, and a bunch of other things. Of course it takes forever to compile.

                                                    1. 6

                                                      A super rough rule of thumb: use getters if you need to perform logic to determine the property. Use setters if you need to maintain a class invariant. Otherwise use direct access.
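To make the rule of thumb concrete, here's a small Go sketch (the `Rectangle` type and its fields are made up for illustration): a getter that computes a derived property, a setter that maintains an invariant, and direct access for everything else.

```go
package main

import "fmt"

// Rectangle illustrates the rule of thumb above.
type Rectangle struct {
	Width  float64 // direct access: no invariant, no logic needed
	height float64 // guarded: must stay non-negative
}

// Area is a "getter" that performs logic to derive a property.
func (r *Rectangle) Area() float64 {
	return r.Width * r.height
}

// SetHeight is a "setter" that maintains the invariant height >= 0.
func (r *Rectangle) SetHeight(h float64) {
	if h < 0 {
		h = 0
	}
	r.height = h
}

func main() {
	r := &Rectangle{Width: 3}
	r.SetHeight(-2) // clamped to 0 by the invariant
	r.SetHeight(2)
	r.Width = 4 // plain field: direct access is fine
	fmt.Println(r.Area()) // → 8
}
```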

                                                      1. 8

                                                        Always using methods for access is a type of future-proofing, since you can never be sure when you may need to add logic around access, and so a method is more tolerant of change over time. In Java, the interface changes if you switch from property access to method access.

                                                        I have no strong opinion, though, since I see it as a judgement call.

                                                        1. 8

                                                          My question is how often has the working programmer ever had to do that? I don’t remember ever doing this in all my years of software. Not once have I been able to take advantage of the fake insurance policy that getters and setters provide.

                                                          Well, I have had to do it before. Maybe the author writes more green field stuff than maintaining systems.

                                                          1. 2

                                                            In most cases, you’ve got the source code that uses the object. Just change the code.

                                                            1. 1

                                                              Less-helpful for writing libraries used by several code bases - also something I have done several times. This is why it’s a judgement call to me. You are in a better place to predict how your code will be used than I am.

                                                        1. 13

                                                          Surely, there must be some browser that throttles background activity? :P

                                                          1. 2

                                                            Can you confirm that Firefox on Windows does not exhibit that problem? (I don’t know how I could check that myself.)

                                                            1. 9

                                                              Not without digging a bit further.

                                                              What I can confirm is that Firefox doesn’t allow new windows (popups and pop-unders) below a minimum size of 100 × 100 px.

                                                              1. 6

                                                                Why does it allow them at all? Even if you want JS to open windows (I don’t), it seems like a mistake not to force them into a background tab, let alone to allow JS to control placement.

                                                                I’d like to see popups disabled entirely by default.

                                                                1. 3

                                                                  Firefox does block them by default. Popups can be useful in the workflow of some web apps, and Firefox allows users to whitelist by domain.

                                                                  1. 1

                                                                    From that page:

                                                                    Is the pop-up shown after a mouse click or a key press?

                                                                    And

                                                                    Certain events, such as clicking or pressing a key, can spawn pop-ups regardless of if the pop-up blocker is on. This is intentional, so that Firefox doesn’t block pop-ups that websites need to work.

                                                                    Sketchy websites game that. Just disable popups entirely. Delete the code, and be done with it.

                                                                    1. 1

                                                                      You’re breaking my workflow! Seriously, though: I use a pop-up window to present slides driven by the main window.

                                                                  2. 1

                                                                    People are already trained to answer permission requests ‘this site wants to send you notifications allow/deny’. Isn’t it a good time to block opening new windows/tabs/popups via JS by default and prompt the user for a decision?

                                                                    1. 1

                                                                      The sad truth is that people are trained to click “Yes/Allow” for their thing to work. But yes, the popup blocker should disallow most new windows, unless provoked by a true user gesture (modulo bugs, of course).

                                                                      1. 1

                                                                        The sad truth is that people are trained to click “Yes/Allow” for their thing to work.

                                                                        Nothing sad about that. What you actually mean is that a majority of people do that, but that doesn’t justify not giving the minority a choice.

                                                              1. 4

                                                                The Plan 9 C compilers are fast. Really fast. I remember compiling kernels served from remote filesystems in ~6 seconds… Does wonders for quick turnaround time…

                                                                1. 5

                                                                  Yes. The whole Plan 9 toolchain is a joy to use, and it’s amazing to see how the 9front people have kept it up to date and usable, with working SSH clients, wifi drivers, USB 3, hardware virtualization that can run Linux and OpenBSD, and the cleanest NVMe driver I’ve ever seen.

                                                                  I actually use the system regularly for hacking on things; while it’s definitely not the most practical choice, I really enjoy it.

                                                                  1. 3

                                                                    Wait up, wifi drivers? I need to set that up on one of my several gazillion laptops posthaste

                                                                    1. 1

                                                                      9front uses OpenBSD’s wireless firmware, so if your card works on OpenBSD, it’ll probably work on 9front.

                                                                      1. 3

                                                                        It has far fewer drivers, though. You’ll probably have good luck with an older Intel card, but you should check the manual. As with all niche OSes, don’t expect it to work on random hardware out of the box. And as with many niche OSes, older thinkpads are usually a good bet for working machines.

                                                                        1. 2

                                                                          Yes, good point. Now that I am thinking about it, even then the support for wireless is quite bare. When I ran 9front on a ThinkPad a couple years ago, I think I recall the Centrino Wireless-N line of cards working well. For anyone interested, here are the docs.

                                                                  2. 1

                                                                    What made it (or makes it) so fast? Did it have to leave out some other feature to achieve that? Or was it just the Plan 9 source? I remember hearing that it had no ifdefs or other preprocessor instructions, which let it compile so quickly.

                                                                    1. 3

                                                                      The Plan 9 source was definitely a part of it (include files did not include other files), but the compiler itself also eschewed optimizations in favour of fast code generation. The linker wasn’t that fast, though.

                                                                      Here’s a quote from the original papers that came out in the early nineties:

                                                                      The new compilers compile quickly, load slowly, and produce medium quality object code.

                                                                      All in all, the kernel was a few megabytes in size, compiled from several hundred thousand lines of code. That’s considerably less than the core of the Linux kernel at the time, and not counting the myriad drivers Linux had that Plan 9 didn’t. More here: https://9p.io/sys/doc/compiler.html

                                                                  1. 7

                                                                    The rc file seems like a simple and good concept. Already your manual and docs are more clear than irssi’s.

                                                                    1. 7

                                                                      manual and docs […] more clear than irssi’s

                                                                      That’s a low bar to clear, though.

                                                                      1. 2

                                                                        That is true, but all the more reason to purge irssi.

                                                                        1. 1

                                                                          I dunno, dicking around with funky scripts is half the fun with irssi ;)

                                                                          1. 2

                                                                            fun

                                                                            for certain values of ‘fun’ …

                                                                            1. 2

                                                                              So much hate for an IRC client I’ve been using for almost a decade without a hitch.

                                                                              1. 3

                                                                                Exactly. irssi has flaws – as mentioned, configuration is unnecessarily painful – but I was a satisfied user for over a decade, and if I didn’t end up with the irrational itch to write my own, I’d have continued using it.

                                                                                1. 2

                                                                                  I use it too, I hate the website.

                                                                      1. 4

                                                                        Slightly off topic: the libstd link of Myrddin leads to a 404. The link in the libraries section on this page is missing a trailing slash, and so is a second one when clicking “Run!” here. The contbuild link in the navigation on the left also lacks a trailing slash.

                                                                        Contbuild looks really nice, especially after buildbot became so JavaScript-heavy.

                                                                        Even though I never used Myrddin, the implementation of Irc.myr looks really straightforward. It was a pleasure to skim over it. :)

                                                                        1. 2

                                                                          Thanks, should be fixed.

                                                                        1. 6

                                                                          Why not just use a Windows build of Emacs?

                                                                          I haven’t used Windows in years, but when I did, the official Windows build worked great for me. It didn’t (doesn’t?) have an installer, and had to be added to my PATH by hand, but it only took 2 seconds, and was simpler than the process in the article.

                                                                          1. 5

                                                                            Because, as far as I understand, the Windows Subsystem for Linux has a separate view of the file system, and accessing files across subsystems is not expected to work. So if you want to edit WSL files, your editor needs to run under WSL.

                                                                            https://blogs.msdn.microsoft.com/commandline/2016/11/17/do-not-change-linux-files-using-windows-apps-and-tools/

                                                                            1. 1

                                                                              Wow, that’s an unfortunate limitation, but it does explain the need for a WSL based Emacs.

                                                                          1. 32

                                                                            There’s a “troll” flag specifically for comments like this. Please don’t take this responsibility upon yourself. If there’s even a chance that the comment was made in good faith, mods should stay far away from it.

                                                                            1. 10

                                                                              I disagree. It’s too easy for hatred to snowball. If widely hated technology X comes up, comments hating on it will get highly upvoted. Even though I dislike the trend, I am tempted to participate, because often I also don’t like widely hated technology X and I want to vent my frustration too. But venting frustration isn’t actually productive, it just puts me in a sour mood.

                                                                              The question is, do we want to allow communal bitching, or try to foster a more positive environment? I try to fill my life with positivity and happiness, so I leave communities with even a moderate proportion of complaining, like HN and Slashdot. Lobsters has traditionally been quite positive and pleasant. For me, that’s its greatest feature.

                                                                              I recall when I first joined I rarely had a negative reaction to any user comments. But lately I’m seeing more and more negative content. And I admit I’m guilty too, I have lashed out at (what I perceive to be?) negative attitudes, which only fuels the flames of negativity.

                                                                              I’m 100% for negativity moderation. If I’m being an asshole and my comment gets deleted for it, then good riddance. I will appreciate the reminder that it’s better to be kind, thoughtful, and considerate. I will rephrase my comment to be more productive, or let it lie if I never had anything positive to contribute in the first place.

                                                                              Actually, that makes me think of a potential moderation strategy: comment hidden until rephrased without hostility. What do you think @pushcx? Or what about a “request non-hostile / productive rephrasing” flag, separate from upvoting and downvoting. Feedback from peers is preferable to moderation, but I don’t think votes are expressive enough. If someone makes a great point, I want to upvote. But if they’re an asshole, I want to downvote. So instead I do nothing, and the situation doesn’t improve.

                                                                              1. 8

                                                                                If I’m being an asshole and my comment gets deleted for it, then good riddance. I will appreciate the reminder that it’s better to be kind, thoughtful, and considerate.

                                                                                I doubt that you will appreciate it when you don’t agree that you’ve been an asshole. Especially in the case of that deleted comment, where not only the author but also some group of users considered it surprising that it was removed.

                                                                                Since the rules of commenting here are vague at best, moderation should be limited only to comments that are clearly and strongly harmful to the community.

                                                                                1. 5

                                                                                  Since the rules of commenting here are vague at best, moderation should be limited only to comments that are clearly and strongly harmful to the community.

                                                                                  So it’s okay if comments only mildly harmful to the community are allowed to become the norm, and we end up with a shoddy community?

                                                                                  1. 7

                                                                                    The most likely way for lobsters to become shitty is if we start having activist mods. We’re fine without censorship, thank you.

                                                                                    1. 7

                                                                                      I disagree. I agree with what pushcx said in this post:

                                                                                      Communities like Usenet, 4chan, and YouTube with little to no human moderation sink into useless garbage.

                                                                                      Moderators exist to keep discussion civil. If you believe active moderation is inherently fascist, then how do you propose maintaining civility?

                                                                                      1. 5

                                                                                        By being a small, invitation-based tech community, mostly.

                                                                                        1. 5

                                                                                          We’re not. We’ve got 8k+ users now.

                                                                                          1. 6

                                                                                            8k+ users now.

                                                                                            Yes, out of that only 3,448 have more than 0 karma, and 725 have more than 100. I don’t know… this doesn’t look to me like a huge number of active/posting users.

                                                                                            In my opinion this site is still quite small.

                                                                                            1. 2

                                                                                              The example communities get millions of uniques a month, even HN does. I think I regularly recognize the majority of posters in a given thread.

                                                                                          2. 2

                                                                                            Civil is one thing, but banning posts that point out that something new would be considered an abomination by the long beards is not fair. It’s a policy that actively benefits anything new, regardless of its characteristics.

                                                                                            SystemD, too, had it hard. Why protect Electron?

                                                                                            1. 3

                                                                                              Explain to me how “ugh I hate Electron / systemd” is a novel or interesting idea.

                                                                                              1. 3

                                                                                                Your post is neither novel nor interesting, but I don’t support it being deleted.

                                                                                                1. 3

                                                                                                  But my post is not negative / hateful. I specifically addressed hateful comments of negligible value, a key component of my argument that you’ve blatantly ignored. And I do not recall suggesting censoring criticism of Electron, or systemd, or anything else. Only moderation to encourage civil and productive discourse. I even suggested moderation alternatives to deletion, which you’ve also ignored.

                                                                                            2. 2

                                                                                              False dichotomy. We’re not asking for little to no human moderation, we’re asking for moderation at the behest of the community at large. If the community asks for a lot of moderation, use a lot of moderation.

                                                                                              1. 1

                                                                                                It is a false dichotomy, because I was pointing out the false equivalency of moderation == censorship. Perhaps I was reading into it too much, but the characterization of moderation as censorship implied a desire for no moderation, to me anyway.

                                                                                                1. 1

                                                                                                  That’s not the sense that I got.

                                                                                          3. 1

                                                                                            By ‘moderation’ I was thinking of deletion of comments - sorry if that wasn’t clear.

                                                                                            1. 2

                                                                                              I also suggested hiding until rephrasing, or a specific avenue for feedback on tone. What do you think of those?

                                                                                              And your doubt is unfounded, I do not ever disagree that I have been an asshole. That judgement is not mine to make, as I cannot disagree with how someone else feels. If I believe I haven’t said anything wrong then I instead assume there was a miscommunication. That’s part of why I suggested enforced rephrasing rather than deletion.

                                                                                              And if the group of people is surprised my comment got deleted because they believe I have a right to be an asshole, then I disagree with them.

                                                                                              1. 1

                                                                                                Edit: dupe

                                                                                                1. 0

                                                                                                  I’m not really willing to subscribe to the lowest-common-denominator definition of asshole. I’d probably just leave if posts I didn’t think were bad were getting deleted regularly (mine or anyone else’s).

                                                                                                  I think just downvoting them into the grey realm is plenty.

                                                                                                  1. 2

                                                                                                    Since you have nothing to say about my alternative to deletion, and instead are continuing to fear monger about mods deleting posts regularly when directly presented with an alternative, I will assume that you have no interest in a real discussion and would prefer to complain endlessly.

                                                                                                    1. 2

                                                                                                      downvoting them into the grey realm is plenty

                                                                                                      That is your answer right there - which answers your question to me from few post back.

                                                                                                      To be honest your last post is great example of post devoid of any value. You could have just leave this particular thread but instead decided to insult other user…

                                                                                                      1. 2

                                                                                                        Actually, that answer specifically makes no sense as a reply to my comments. From my first comment:

                                                                                                        It’s too easy for hatred to snowball. If widely hated technology X comes up, comments hating on it will get highly upvoted.

                                                                                                        And the rest of the comment doesn’t track with anything I’ve said either, which I felt was valuable to emphasize as my motivation for not replying further. Perhaps the value was minimal, but I really do not think downvoting is enough and didn’t want to leave it without a response. You’re right though, I was a little rude. I personally consider it rude when someone puts me on the spot to consider their ideas while flagrantly ignoring mine. Rudeness snowballs easily, and I’ve just fallen into that trap. This would be a good situation for a peer-suggested rephrasing feature, since I could easily rephrase my comment to avoid being rude and provide more value.

                                                                                          4. 5

                                                                                            The question is, do we want to allow communal bitching, or try to foster a more positive environment?

                                                                                            I’d lean strongly towards allowing communal bitching, especially if the alternative is forcing positivity. I gravitate to less positive communities because I want harsher feedback. I regularly make things that simply aren’t very good. If someone doesn’t feel like they can just say that without moderator interference, this harms me. I lose valuable discussion, and I worry if people dislike what I’m building but aren’t telling me because it wouldn’t be seen as “positive”.

                                                                                            Therefore, I see force-fed positivity as an anti-goal. I want a community with carefully thought out positions, where statements are supported, arguments make sense, and people back up what they say. I don’t want them to shy away from telling me that I did something dumb.

                                                                                            1. 8

                                                                                              I think there’s a difference between bitching and being critical. Compare the thing that was deleted (“daily electron hate post”) with [this] in the same thread. The latter comment by @qbit gets the same idea across (‘electron is heavyweight’) but does it in a way that’s comprehensive and informative.

                                                                                              1. 1

                                                                                                I think there’s a difference between bitching and being critical.

                                                                                                I agree. But the comment I was responding to was pushing for ‘negativity moderation’.

                                                                                                For bitching vs being critical – we generally don’t need heavy moderation to enforce it, especially when that moderation has a stated goal of enforcing positivity. Downvotes are mostly sufficient. An account that posts nothing but throwaway comments should probably skip directly to a warning and then a ban.

                                                                                                1. 6

                                                                                                  Choose which reply to your comment you would prefer. One is non-negative, one is negative, both have the same meaning.

                                                                                                  Reply 1: By negativity moderation, I don’t mean enforced positivity in all things. Constructive criticism is valuable, and can certainly be accomplished without negative tones like hostility, superiority, derision, and plain old pointless complaining. “This sucks” style comments aren’t useful criticism, just useless negativity. There is no reason criticism and feedback can’t be provided in a positive way or neutral way. Yes neutral, because neutral is non-negative.

                                                                                                  Reply 2: Have you heard of constructive criticism? I don’t appreciate you reducing the nuance of my argument to the point of stupidity, unless you really are dumb enough to conflate non-negativity with enforced positivity. Is it really too complicated for you to criticize without being an asshole? Cause if you can’t sort out how to convey the same meaning in different tones, go back to your “less positive” communities for those with an inadequate grasp of the English language. If you had actually read my comment properly you would see that I specifically don’t like unproductive communal bitching, like basic “this sucks” comments with no real value.

                                                                                                  I would prefer a community that fosters reply 1 and shuns reply 2. I believe we all have the capacity to choose our tone, thus my idea of enforced or suggested rephrasing rather than outright deletion.

                                                                                                  1. 4

                                                                                                    I concur with this. pyon’s reply to me in another thread is a good example of a “Reply 2”-style post that diminishes my desire to participate in the site.

                                                                                                    1. 2

                                                                                                      Except the question at hand is whether you’d prefer

                                                                                                      Reply 1: Constructive criticism

                                                                                                      Reply 2: [Comment deleted]

                                                                                                      1. 1

                                                                                                        There are other strategies for moderation, as I pointed out in my original comment.

                                                                                                      2. 2

                                                                                                        I prefer interacting with people who are willing/able to give me reply 2. It doesn’t leave me guessing, and I’m not good at guessing. I’ve recently been involved in a few emails with Theo De Raadt, and it’s been rather refreshing to have bad ideas immediately called “bullshit” (in one case, in the same email that was telling me how the work was appreciated).

                                                                                                        There’s clearly a lot of variation in how people prefer to interact, and your preferences aren’t universal. Edit: And, I don’t think that moderating your preferences is going to have a good effect.

                                                                                                        1. 3

                                                                                                          De Raadt and Linus control large projects that people want to participate in and contribute to. There’s a “carrot” there that makes it possible to overcome the “stick,” for some people, even if it’s distasteful. Lobste.rs is a discussion site. I am here only because I like to talk about tech, and I like to read tech material. It’s a small carrot. If people are assholes here, it makes it less rewarding to participate, and reduces my opinion of the people involved. I would rather people be more circumspect, take a bit longer to consider how another person might receive the message they’re typing, and what it might contribute to the site.

                                                                                                          1. 3

                                                                                                            Okay. Since you prefer a rude style I will reply in a rude style, just for you.

                                                                                                            I have no fucking clue how reply 1 was in any way less clear than reply 2. If anything I think reply 1 is more clear, since my actual reasoning isn’t obscured or diluted by insults. Can you really not tell I think you’re full of shit from reply 1? “Constructive criticism […] can certainly be accomplished without negative tones.” Does the word certainly mean something different to you? Undoubtedly; definitely; surely. “There is no reason criticism and feedback can’t be provided in a positive way or neutral way.” No reason. As in literally zero reasons. When the amount of valid reasons is zero, your reason is not included. I consider your idea that rudeness enhances criticism absolutely incorrect. Ergo I think your idea is bullshit.

                                                                                                            I think it’s utterly ridiculous that you need someone to be rude to jostle your brain into parsing English correctly. And it’s irresponsible to pretend your inadequacy is an acceptable justification for encouraging people to act like assholes. You know full well that vitriol turns people away from communities, and discourages contribution. If you somehow don’t know that, then grow the fuck up and clue in to reality because that’s a pretty fucking basic instinct that most humans develop as children.

                                                                                                            I don’t think that moderating your preferences is going to have a good effect.

                                                                                                            Then consider actually reading my comment, and you may notice my suggestion for user feedback on tone as an alternative to moderation.

                                                                                                            I did not want to be rude, but you quite literally asked me to be rude. I hope it has helped you understand my position, and doesn’t “leave you guessing.”

                                                                                                            1. 1

                                                                                                              Okay. Since you prefer a rude style I will reply in a rude style, just for you.

                                                                                                              Your first paragraph confuses ‘poorly written’ and ‘rude’. Formatting may have helped. The second one, however, is excellent. I appreciate that.

                                                                                                              Can you really not tell I think you’re full of shit from reply 1?

                                                                                                              It does take more effort for me to parse, yes. I’m not sure why you find this surprising, given that in the second message you are crystal clear about that. Then again, you’re talking to a person who has been accused of being “not human”, so… shrug, make of it what you will.

                                                                                                              I think it’s utterly ridiculous that you need someone to be rude to jostle your brain into parsing English correctly.

                                                                                                              And yet, here I am, ridiculous in my inadequacy.

                                                                                                              Then consider actually reading my comment, and you may notice my suggestion for user feedback on tone as an alternative to moderation.

                                                                                                              Which, as I said, removes a lot of dynamic range in the conversation. Hopefully, vitriol is used relatively rarely, but adding a layer of policing is not an improvement, for reasons I already stated. In any case, it seems like your idea of a pleasant community is one that I find unpleasant. I’ve happily wandered away from groups like that in the past, because I just didn’t find anything that interested me in the culture.

                                                                                                              Anyways, for now, I’ll just remain happy that you’re not a moderator on this site, and move on. I usually avoid this kind of discussion, and I’m eager to get back to that state.

                                                                                                              1. 2

                                                                                                                I don’t propose to outright ban anything. But after a certain topic has generated a couple of “let’s jump on the hate bandwagon” threads, I’d probably start axing comments doing nothing but trying to start another one. If a comment has genuine criticism, that’s different.

                                                                                                                User feedback doesn’t mean banning certain tones either. If the community finds a vitriolic comment acceptable in context, then there’s no problem. That is different from the hate bandwagon situation, and perhaps I didn’t provide adequate distinction.

                                                                                                                Feedback from peers also isn’t moderation, it’s just a suggestion. I’ve read many comments where my reaction is “you’re right, but you’re also a total douchebag so I’m not going to upvote you unless you rephrase.” If there was a simple anonymous mechanism to enable that exchange, I would use it, and the author would have the option of rephrasing or not. From a moderator, enforcement should only be used in more extreme cases, or cases where the comment is purely vitriolic with no useful feedback.

                                                                                                                Another possible mod tool is a private warning. A mod flags a comment, and the next time the flagged author hits reply in the thread it reminds them they have been issued a warning for hostility, and to keep their next comment more civil or it may be deleted / blocked awaiting revision for civility.

                                                                                                                There’s no reason to get totalitarian about this, a little encouragement towards civility can go a long way.

                                                                                                  2. 2

                                                                                                    Where would you draw the line between bitching and commiseration over a commonly acknowledged criticism?

                                                                                                    1. 1

                                                                                                      Hostility and bitterness mostly. “I hate Electron it’s awful” and “Electron causes a lot of problems for me” are pretty different. It’s a hard line to draw precisely but I think certain comments obviously only exist to express hatred.

                                                                                                  3. 2

                                                                                                    We also had a handy thread exploring how to use votes correctly. :)

                                                                                                  1. 8

                                                                                                    mk(1) is an interesting simplification over make. Reducing the capabilities of make makes the recipes in makefiles a lot easier to parse both for machines and humans. Check it out:

                                                                                                    https://9fans.github.io/plan9port/man/man1/mk.html

                                                                                                    http://www.cs.tufts.edu/%7Enr/cs257/archive/andrew-hume/mk.pdf

                                                                                                    1. 4

                                                                                                      I’m sure it’s fine for small projects, but I’m more interested in tools that everybody including big projects can use (e.g. the Linux kernel which uses GNU Make, or Clang which uses CMake now).

                                                                                                      Android used to be built with 250K+ lines of GNU make, including the “GNU Make Standard Library”, which is basically a Lisp library in Make – complete with Peano numbers as far as I remember! Peano numbers in a library are a good sign you might need integers in your language!!!

                                                                                                      People say bash is bloated, and it is. But I can see that there was demand for every feature. It’s not like they just added features nobody wanted.

                                                                                                      Actually I think the problem is that it has too few features for modern tasks, like say proper hash tables / maps.
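                                                                                                      (For what it’s worth, bash 4+ does ship one form of this: flat associative arrays via declare -A, string keys to string values, no nesting. Whether that counts as “proper” maps is debatable. A minimal sketch, with invented file names:)

```shell
#!/bin/sh
# bash 4+ associative arrays: flat string -> string maps, no nesting.
bash -c '
  declare -A sizes
  sizes[access.log.1.gz]=1024
  sizes[access.log.2.gz]=2048
  total=0
  for f in "${!sizes[@]}"; do
    total=$(( total + sizes[$f] ))
  done
  echo "total=$total"    # prints total=3072
'
```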

                                                                                                      Same with Make.

                                                                                                      For example, how do you extract dependencies from C files with mk? That is, the gcc -M feature I mentioned in the post. Is it documented?
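                                                                                                      (For context: gcc -M / -MM print a make-style rule listing each translation unit’s header dependencies. The sketch below only imitates that output shape by grepping local "..." includes out of a made-up file; the real flag runs the preprocessor and handles <...> includes, nesting, and macros, so this is illustration, not a substitute:)

```shell
#!/bin/sh
cd "$(mktemp -d)"

# A made-up translation unit and header.
printf '#include "foo.h"\nint main(void){return 0;}\n' > foo.c
: > foo.h

# Imitate the shape of `gcc -MM foo.c` output:
# the object file, then every file it depends on.
deps=$(sed -n 's/^#include "\(.*\)"/\1/p' foo.c)
echo "foo.o: foo.c $deps"    # foo.o: foo.c foo.h
```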

                                                                                                      Also, I just wrote a Makefile for all the access.log.*.gz files that I sync from my web host. That requires a bit of metaprogramming – dynamically constructing the rules from the filenames. The “pattern rules” of GNU make aren’t enough in that case.
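                                                                                                      (The metaprogramming in question can be sketched as a shell script that writes the rules out from the file names. The file names and the zcat transformation below are invented stand-ins for whatever the real Makefile does:)

```shell
#!/bin/sh
cd "$(mktemp -d)"
# Pretend these were synced from the web host.
: > access.log.1.gz
: > access.log.2.gz

# One explicit rule per log file, generated rather than written
# by hand; a Makefile can then `include rules.mk`.
for f in access.log.*.gz; do
    out="${f%.gz}.tsv"
    printf '%s: %s\n\tzcat %s > %s\n' "$out" "$f" "$f" "$out"
done > rules.mk
head -2 rules.mk
```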

                                                                                                      1. 4

                                                                                                        mk builds an entire operating system, Plan 9. I personally find cmake to be trash. I’m not an expert on mk (perhaps someone can fill in the blanks), but I’m pretty sure mk supports generating dependency lists during the build.

                                                                                                        1. 4

                                                                                                          cmake the language is definitely trash, but it’s now the “go to” thing because it has the best support for cross-platform builds (Mac and Windows in particular).

                                                                                                          As far as I understand, plan9 mk has an easier job, because everything is in the same tree. The C compilers themselves are much simpler than Clang / GCC (but I don’t see anyone clamoring to use Plan 9 C compilers either).

                                                                                                          Plan 9 is small and coherent and adheres to a strict style itself. That is great, but I am more interested in tools that can handle a lot of heterogeneity.

                                                                                                          Everything is small and simple until you need 100 or 1000 people to build it, and for better or worse you need 1000 people to build some things (e.g. Android, a completely new mobile phone OS).

                                                                                                          So I would say I am more interested in supersets of GNU Make, not subsets. As far as I can tell, mk is basically a subset of Make.

                                                                                                          1. 3

                                                                                                            As far as I understand, plan9 mk has an easier job, because everything is in the same tree. The C compilers themselves are much simpler than Clang / GCC (but I don’t see anyone clamoring to use Plan 9 C compilers either).

                                                                                                            It goes beyond even that: in Plan 9, there are no shared libraries, and a special #pragma lib "foo" that is inevitably mentioned in foo.h tells Plan 9’s compiler what static libraries a given header depends on. This combo means that mk files can be simpler due to more intelligence in the C compiler itself. You can read a bit more in the Plan 9 programming tutorial (scroll down to “As aside on linking,” which I unfortunately cannot link to directly, since it lacks an anchor.)

                                                                                                          2. 2

                                                                                                            When you have a large complex problem, you can either come up with a convention to reduce the scope of the problem, or you can make the tools more complex to span the whole problem.

                                                                                                            If you control the system, the former is almost always a better choice. If you do not, welcome to the complexity treadmill. It’s hard to write a good, clean, minimal system without a strong set of conventions on the right way to do something.

                                                                                                            1. 4

                                                                                                              I think this is orthogonal to the point I was making in the post. If you control the system, sure, I agree. You want to design it with careful conventions and not make everything a special case, blowing up your code.

                                                                                                              But the requirement to “control” the system limits what you can do. I’m more interested in systems with heterogeneity, because I argue every big system has heterogeneity. For example, the Android ecosystem, or even the Apple ecosystem. Apple purposely tries to make things homogeneous, with some success, but the sheer size of the business means that they ship a ton of code.

                                                                                                              I think of shell as a language for dealing with heterogeneity gracefully. You’re gluing together parts that weren’t designed to be glued together.

                                                                                                              I didn’t make this point that strongly in the post, but here it is:

                                                                                                              • Shell didn’t solve the build problem, so Make was added on top. Shell and Make now largely do the same thing – they invoke processes (in parallel) and they have crappy ways of manipulating strings.
                                                                                                              • The Make language isn’t enough to solve the build problem, so GNU bolted Guile Scheme on top. And I argue that users needed this – I have needed it even for one person projects. It’s not people adding features “for fun”.

                                                                                                              So now you have three Turing complete languages in one system. But the cause is NOT implementing too MUCH, it’s implementing too LITTLE. If shell expanded its domain just a tiny bit, then you wouldn’t need Make. And if shell had richer data structures, or even just integers, you wouldn’t need Guile Scheme.

                                                                                                              I made a point in this direction last year:

                                                                                                              http://www.oilshell.org/blog/2016/11/14.html

                                                                                                              Also, I’ve only read Plan 9 papers, and not used it, but I think it still has this problem of “implementing too little causes other people to implement new tools, increasing global complexity”.

                                                                                                              A sibling comment posted this code which is fairly similar to GNU Make:

                                                                                                              OBJS=${EXES:%=%.o}
                                                                                                              < ${OBJS:%.o=%.d}
                                                                                                              

                                                                                                              In POSIX sh, and I’m guessing Plan 9’s shell rc (?), there are slightly different ways of doing those two things. The languages overlap, which is not “minimal”.
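                                                                                                              (Concretely, the two mk expansions above map to POSIX sh roughly like this, with an invented EXES value; the same operations, different syntax, which is exactly the overlap:)

```shell
#!/bin/sh
EXES="prog tool"

# mk/make: OBJS=${EXES:%=%.o}  (append .o to every word)
OBJS=$(for e in $EXES; do printf '%s.o ' "$e"; done)
echo "$OBJS"    # prog.o tool.o

# mk/make: ${OBJS:%.o=%.d}  (swap the .o suffix for .d)
DEPS=$(for o in $OBJS; do printf '%s.d ' "${o%.o}"; done)
echo "$DEPS"    # prog.d tool.d
```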

                                                                                                              1. 1

                                                                                                                So now you have three Turing complete languages in one system. But the cause is NOT implementing too MUCH, it’s implementing too LITTLE. If shell expanded its domain just a tiny bit, then you wouldn’t need Make. And if shell had richer data structures, or even just integers, you wouldn’t need Guile Scheme.

                                                                                                                I made a point in this direction last year:

                                                                                                                http://www.oilshell.org/blog/2016/11/14.html

                                                                                                                I completely don’t understand your argument. Mine is that make should do what it does best, which is resolving dependencies. If shell can do it, make shouldn’t replicate it. I guess in Plan 9 what actually happened is that awk can do it, so shell shouldn’t replicate it (regarding that integer thing). BTW, bash supports integers well, and zsh even handles floating point.
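                                                                                                                (On the integer point, POSIX arithmetic expansion, which bash inherits, covers basic integer work; floats are where you need zsh or an external tool:)

```shell
#!/bin/sh
# POSIX $(( )) arithmetic: integers only.
n=7
echo $(( n * 6 ))           # 42
echo $(( (1 << 10) - 24 ))  # 1000
```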

                                                                                                                About the parameter expansion, ${OBJS:%.o=%.d}: in Plan 9, rc doesn’t have syntax equivalent to bash’s, so mk handles it. This supports the basic ideology that no two tools should share common capabilities.

                                                                                                                1. 3

                                                                                                                  Two answers:

                                                                                                                  I think the argument that make and shell are separate tools, that should each do what they do best, is a fallacy. At the very least, shell programs and make programs all have to solve the same string manipulation problems. Shelling out to sed isn’t really an answer.

                                                                                                                  mk and rc might be somewhat orthogonal rather than overlapping like bash and GNU make, but I would argue that this is probably because they don’t have many users. For example, if rc doesn’t have a method to replace extensions on filenames, that’s probably because nobody uses it very much. That’s an extremely common operation in shell scripts that people need. You need it in Makefiles too obviously.


                                                                                                                  Your original comment sold mk as a subset of GNU Make. As I said, I’m more interested in supersets of GNU make than subsets.

                                                                                                                  However, I took another look at the mk paper now, and it’s actually not a subset of make as you implied. They have a pretty nice comparison at the end.

                                                                                                                  • The regex pattern rules / metarules are exactly what I was referring to in terms of needing “metaprogramming”, so that’s a +1. I still think you need Turing-complete metaprogramming, but being able to capture multiple parts of a path and use \1 and \2 definitely helps.
                                                                                                                  • It seems they support out-of-date computations other than timestamps, so +1. GNU make is completely timestamp-based.
                                                                                                                  • It seems to have a more sane syntax for things like .PHONY, which is just a big hack in GNU Make to avoid actually parsing anything…
                                                                                                                  • I don’t quite see the details, but the algorithm for metarules might be better than that of GNU Make, which I think is O(n^2) at least. Certainly I got huge speedups in practice by disabling the built-in pattern rule database in GNU Make.
                                                                                                                  • I think the model of building the entire dependency graph first makes sense, although I have to look more carefully. GNU Make does have a pretty confused execution model as I mentioned.
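                                                                                                                  (On the first bullet: if I’m reading the paper correctly, a regular-expression metarule carries the R attribute, with captured groups reusable in the prerequisite list. Something like the sketch below; the syntax is from memory of the mk paper, so treat it as approximate rather than a working mkfile:)

```
# R marks a regexp metarule; \1 and \2 refer to the captured
# path components in the prerequisite list.
([^/]*)/(.*)\.o:R:  \1/\2.c
	cc -o $target -c $prereq
```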

                                                                                                                  However, there is a big red flag to me in the first extended example:

                                                                                                                  Now if we want to profile prog (which means compiling everything with the -p option), we need only change the first line to CFLAGS=-g -p and recompile all the object files. The easiest way to recompile everything is with mk -a which says to always make every target regardless of time stamps.

                                                                                                                  It would be a lot better if the build tool used CFLAGS as a key to the cache, so you wouldn’t have to manually rebuild. This is how https://bazel.io/ works for example.

                                                                                                                  So in short, it does seem like mk is interesting and better than the average build tool I see on Github, despite being from 1987!!! And it does look like an improvement on GNU make in many ways. However I think the problem is that there’s no incentive for anyone to switch to it. Being “better” is not enough.

                                                                                                                  This relates to what I’m doing with Oil, in that I believe you have to implement “all of” bash to displace it. Programming languages have network effects (and mk and Make are programming languages), which makes them very hard to displace. Even though I believe mk is better, I still wouldn’t use it for a new project. I would take inspiration from it though.

                                                                                                                  Thanks for the pointer, but I would have liked to be pointed at the things it does do rather than the things it doesn’t!

                                                                                                          3. 1

                                                                                                            I’m not sure what gcc -M feature you are talking about. This does not seem to be related to any specific features of GNU make. With mk, you can just do

                                                                                                            CFLAGS=-MMD
                                                                                                            EXES=t
                                                                                                            $EXES:
                                                                                                            OBJS=${EXES:%=%.o}
                                                                                                            < ${OBJS:%.o=%.d}
                                                                                                            %.o:    %.c
                                                                                                                    cc $CFLAGS -c $stem.c
                                                                                                            %:      %.o
                                                                                                                    cc -o $stem $stem.o
                                                                                                            
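                                                                                                            For reference, a sketch of the roughly equivalent setup in GNU Make (assuming gcc or clang, whose -MMD flag writes a .d makefile fragment as a side effect of compilation):

```make
# -MMD makes the compiler emit t.d, listing t.c's header dependencies,
# while it compiles t.o.
CFLAGS += -MMD
EXES := t
OBJS := $(EXES:%=%.o)

all: $(EXES)

%.o: %.c
	$(CC) $(CFLAGS) -c $<

# The leading '-' keeps make from complaining on a clean build, when
# the .d files don't exist yet.
-include $(OBJS:.o=.d)
```

                                                                                                            mk’s < include line plays the same role as -include here.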
                                                                                                            1. 1

                                                                                                              By the way, there is something to be said for handing control back to the shell and existing tools rather than trying to be all-capable. The include line (the one that starts with <) does not really work for multiple targets. A more robust version would be

                                                                                                              <|cat ${OBJS:%.o=%.d} 2>/dev/null || true
                                                                                                              

                                                                                                              You can also throw in that long sed command here if you need it.

                                                                                                              1. 2

                                                                                                                Yeah there are tons of gotchas like this in GNU Make – hence my point about “the obvious thing is wrong”.

                                                                                                                Make gives you virtually no guidance in writing correct build files (i.e. avoiding underspecifying or overspecifying dependencies). Your build could work fine serially, but be silently incorrect in parallel – the worst kind of bug.

                                                                                                                To me it looks like mk doesn’t address that, but I could be wrong again. I read the paper a long time ago but haven’t used it.

                                                                                                                Does mk at least solve the problem of depending on the shell, or does it still shell out to rc (or whatever) for every line?

                                                                                                                Also – do you need the sed command or not? I thought you did, but the point of my post was that people told me you don’t (and I agreed after testing it out). It’s documented one way in the GNU make manual, but people do it the other way. So it would be nice if mk documented the correct way to do it.

                                                                                                              2. 1

                                                                                                                The -MMD is what I was talking about. So it looks like mk supports automatic prerequisites in the same way. I guess the only mechanism you need is to have an “include”, which mk does with < apparently.

                                                                                                                That was just one example, and it looks like mk might be sufficient there. One other major feature I want is expressing build variants (i.e. “metaprogramming”).

                                                                                                                For example, it should be easy to make debug/release builds in different trees, and ASAN builds, and code coverage builds, and PGO builds. This is pretty awkward with GNU Make and most build tools I know of.
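                                                                                                                For what it’s worth, one way to get per-variant trees in GNU Make looks like this (a sketch; the MODE variable and build/ layout are invented for illustration):

```make
# Pick a variant with e.g. `make MODE=asan`; each variant gets its own
# object tree, so switching modes doesn't clobber the other builds.
MODE ?= debug
CFLAGS.debug   := -O0 -g
CFLAGS.release := -O2
CFLAGS.asan    := -O1 -g -fsanitize=address
CFLAGS := $(CFLAGS.$(MODE))
OUT := build/$(MODE)

$(OUT)/%.o: %.c
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS) -c -o $@ $<
```

                                                                                                                It works, but every rule has to be written against $(OUT), which is exactly the kind of boilerplate a more expressive build language could take care of.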

                                                                                                                I thought I wrote about this here but apparently not:

                                                                                                                http://www.oilshell.org/blog/2017/05/31.html

                                                                                                                One of my pet peeves is recompiling the whole project if I change CFLAGS. I think it discourages the use of important performance and security tools like the ones mentioned above. The compiler offers a whole host of tools to help you write better code, but I feel like they’re bottlenecked behind inexpressive build systems, which in turn result from inexpressive build languages.
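                                                                                                                One folk workaround (not a feature of Make or mk, just a pattern people use) is to record the flags in a stamp file that is rewritten only when CFLAGS actually changes, and make the objects depend on it:

```make
# The stamp's recipe runs every time (via the phony 'force' target),
# but only rewrites the file when its contents would differ, so the
# .o files are rebuilt exactly when CFLAGS changes.
.PHONY: force
cflags.stamp: force
	@echo '$(CFLAGS)' | cmp -s - $@ || echo '$(CFLAGS)' > $@

%.o: %.c cflags.stamp
	$(CC) $(CFLAGS) -c -o $@ $<
```

                                                                                                                A flag-keyed cache like Bazel’s would still be nicer, since this still throws away the objects built with the previous flag set.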