1. 2

    Great article – it is really positive to read about somebody taking steps to take care of themselves. It is unfortunate that he had to quit his job (no sabbatical option) to do so.

    1. 3

      And sell a “large chunk” of his equity to do so. Burns himself out to make the equity worth something, sells it to recover. Painful.

    1. 39

      Perhaps build systems should not rely on URLs pointing to the same thing to do a build? I don’t see Github as being at fault here; it was not designed to provide deterministic build dependencies.

      1. 13

        Right, GitHub isn’t a dependency management system. Meanwhile, Git provides very few guarantees regarding preserving history in a repository. If you are going to build a dependency management system on top of GitHub, at the very least use commit hashes or tags explicitly to pin the artifacts you’re pulling. It won’t solve the problem of them being deleted, but at least you’ll know that something changed from under you. Also, you really should have a local mirror of artifacts that you control for any serious development.
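
        A self-contained sketch of the difference (simulating the “upstream” with a throwaway local repo; a real build would point at a GitHub URL):

        ```shell
        set -e
        tmp=$(mktemp -d)
        git init -q "$tmp/upstream" && cd "$tmp/upstream"
        git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "v1"
        pin=$(git rev-parse HEAD)   # the exact commit we audited and depend on
        git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "history rewritten"
        # The branch now points somewhere new, but the pinned hash still names
        # exactly the content we reviewed -- or fails loudly if it's gone:
        git rev-parse --verify "$pin^{commit}"
        ```

        A branch name is a mutable pointer; a commit hash is content-addressed, which is the property a build wants.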

        1. 6

          I think the Go build system issue is a secondary concern.

          This same problem would impact existing git checkouts just as much, no? If a user and a repository disappear, and someone had a working checkout from said repository of master:HEAD, they could “silently” recreate the account and reconstruct the repository with the master branch from their checkout… then do whatever they want with the code moving forward. A user doing a git pull to fetch the latest master may never notice anything changed.

          This seems like a non-imaginary problem to me.

          1. 11

            I sign my git commits with my GPG key; if you trust my GPG key and verify it before using the code you pulled, that would save you from using code from a party you do not trust.

            I think the trend of tools pulling code directly from Github at build time is the problem. Vendor your build dependencies, verify signatures, etc. This specific issue should not be blamed on Github alone.
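
            A minimal sketch of the vendoring idea; the paths and the manifest format here are made up, the point is just that the build reads only files you already snapshotted and audited:

            ```shell
            set -e
            tmp=$(mktemp -d) && cd "$tmp"
            mkdir -p dep project/vendor
            echo 'package foo' > dep/foo.go
            # "Vendoring" = committing the exact files you audited into your own
            # tree, plus a note recording where they came from:
            cp -R dep project/vendor/foo
            echo 'foo: mirrored from example.com/dep (commit <hash>)' > project/vendor/MANIFEST
            ls project/vendor/foo
            ```

            If the upstream account disappears or is squatted, your build doesn’t even notice.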

            1. 3

              Doesn’t that assume that the GitHub repository owner is also the (only) committer? It’s unlikely that I will be in a position to trust (except blindly) the GPG key of every committer to a reasonably large project.

              If I successfully path-squat a well-known GitHub URL, I can put the original Git repo there, complete with GPG-signed commits by the original authors, but it only takes a single additional commit (which I could also GPG-sign, of course) by the attacker (me) to introduce a backdoor. Does anyone really check that there are no new committers every time they pull changes?

              1. 3

                Tags can be GPG signed. This proves that all commits before the tag are what the person signed. That way you only need to check the people assigned to sign the tagged releases.
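
                One way to see why a single signed tag covers all prior history: the tag names one commit hash, and that hash transitively fixes every ancestor. A sketch using a plain annotated tag (git tag -s works the same, it just adds a signature):

                ```shell
                set -e
                tmp=$(mktemp -d)
                git init -q "$tmp/repo" && cd "$tmp/repo"
                git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "first"
                git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "second"
                git -c user.email=a@b -c user.name=a tag -a v1.0 -m "release 1.0"
                # Rewriting any earlier commit changes every descendant hash, so
                # the tag would no longer resolve to the same commit:
                git rev-parse 'v1.0^{commit}'
                ```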

          2. [Comment removed by author]

            1. 2

              Seriously, if only GitHub would get their act together and switch to https, this whole issue wouldn’t have happened!

              1. 4

                I must have written this post drunk.

          1. 2

            A talk that I haven’t seen mentioned here yet, that I really enjoyed: Creativity in Management – John Cleese

            1. -1

              OK. Fork it. Build your own community/dev processes/culture around the way you feel is best, and let the market decide. Don’t do the public talk circuit and whinge about how Linus is a meanie.

              1. 9

                OK. Fork it. Build your own community/dev processes/culture around the way you feel is best, and let the market decide. Don’t do the public talk circuit and whinge about how Linus is a meanie.

                I find this kind of response to criticism to be generally counterproductive. It is often simply dismissive. I mean… shouldn’t forking an entire community be the /last/ option, not the /first/ one?

                1. 6

                  Build your own community/dev processes/culture around the way you feel is best, and let the market decide.

                  I won’t repeat trousers’ comment that forking is the last resort (which I agree on).

                  I will add that a community where someone can talk about problems openly is a healthy robust community.

                  Don’t do the public talk circuit

                  This talk was at linux.conf.au, which is a significant community open source conference. A bunch of kernel developers & maintainers come every year. Linus himself has come a few times. Standing up in front of your peers (or former peers) to explain problems you see is not “doing the public talk circuit”.

                  Linus is a meanie

                  One point made in the talk is how the “angry Linus” meme (particularly the way his abusive LKML outbursts are covered so widely) is a barrier to talking constructively about dysfunction in the rest of the kernel developer/maintainership (which is what the majority of the talk was about).

                1. 2

                  Perhaps it would be more useful to ask people not to derail technical posts with meta-discussion about communication style and behavior. It’s a regular occurrence in this community, and not restricted to mailing list threads.

                  1. 8

                    Many of these submitted mailing list threads aren’t really submitted for their technical content in the first place, though— they’re explicitly submitted because they were a flamewar and people like to gawk at flamewars, so that’s kind of on-topic to discuss imo. The only particularly interesting thing about the recent Torvalds submission, for example, is the flaming. Presumably that’s why the submitter chose to include an all-caps quote, “COMPLETE AND UTTER GARBAGE” in the submission title, rather than highlighting any technical content. I’m going to go out on a limb and predict that if it had a technical title instead of a flamewar title, it wouldn’t have gotten the attention here that it did. (The little technical content the linked post has turns out further down the thread to not even be correct.)

                    At the very least, when people are linking gawk-at-the-flamewar type mailing list posts, can I suggest tagging them with the rant tag?

                    1. 3

                      The only particularly interesting thing about the recent Torvalds submission, for example, is the flaming.

                      He accuses Intel of planning not to fix the Spectre bug: they want to ship a workaround that is off by default, since enabling it would hurt their performance metrics, and shift the responsibility to OS vendors. That’s far more interesting than the flaming and worth the submission in itself.

                      So the IBRS garbage implies that Intel is not planning on doing the right thing for the indirect branch speculation.

                      It’s not “weird” at all. It’s very much part of the whole “this is complete garbage” issue.

                      The whole IBRS_ALL feature to me very clearly says “Intel is not serious about this, we’ll have a ugly hack that will be so expensive that we don’t want to enable it by default, because that would look bad in benchmarks”.

                      So instead they try to push the garbage down to us. And they are doing it entirely wrong, even from a technical standpoint.

                      source: http://lkml.iu.edu/hypermail/linux/kernel/1801.2/04628.html

                      1. 5

                        http://lkml.iu.edu/hypermail/linux/kernel/1801.2/04630.html http://lkml.iu.edu/hypermail/linux/kernel/1801.2/04637.html

                        The next 2 emails show that Linus has misread the patch.

                        You’re looking at IBRS usage, not IBPB. They are different things.

                        Yes, the one you’re looking at really is trying to protect the kernel, and you’re right that it’s largely redundant with retpoline. (Assuming we can live with the implications on Skylake, as I said.)

                        (I pointed that out in the lobste.rs thread, and that’s kind of the thing I was annoyed about)

                        1. 3

                          FWIW, if you look at the second email you linked…

                          Ehh. Odd intel naming detail.
                          If you look at this series, it very much does that kernel entry/exit stuff. It was patch 10/10, iirc. In fact, the patch I was replying to was explicitly setting that garbage up.
                          And I really don’t want to see these garbage patches just mindlessly sent around.

                          Linus seems to be claiming that he didn’t misread the patch.

                  1. 1

                    When you think about it, whichever character gets used as the delimiter is probably also going to be allowed in the filename (anything other than \n or \0). So I’m not sure what the solution to this would be.

                    1. 6

                      Then put the filename alone on the first line, or put the file name last so that you can read in the numeric parameters then the rest of the line is the filename. Right?
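
                      A sketch of the “name last” idea in plain shell (the field names here are invented): peel off the fixed numeric fields, and whatever remains on the line, spaces included, is the file name:

                      ```shell
                      line='1234 8192 my file with spaces.txt'
                      # Strip the fields one at a time with parameter expansion;
                      # after the two fixed fields, the untouched remainder of
                      # the line is the name:
                      pid=${line%%' '*}
                      rest=${line#*' '}
                      size=${rest%%' '*}
                      name=${rest#*' '}
                      echo "$name"
                      ```

                      (A newline in the name would still break this, of course.)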

                      1. 2

                        Seems like the simplest solution. And it’s not like files in /proc don’t use multiple lines already. I guess it’s made that way for legacy support. However, as mort mentioned, you can also have a \n in a filename.

                      2. 5

                        File names can actually contain newlines. Try for example touch "$(printf "hello\nworld")".

                        It would probably work to use a slash as a separator though, unless the executable name might be a path.

                        EDIT: added quotes around $(printf "hello\nworld")

                        1. 1

                          touch $(printf "hello\nworld").

                          This creates two files where I’m testing. But it seems like I’m able to do it without the printf.

                          1. 3

                            Sorry, I should’ve written touch "$(printf "hello\nworld")".

                            If you just run ls, it will show you 'hello'$'\n''world', but if you pipe ls (for example to less), it will show up on two separate lines.

                        2. 4

                          Or use a “safer” format like netstring, tnetstrings, or maybe even Bencode.
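
                          For reference, a netstring is just length-prefixed data, <length>:<payload>, so the payload can safely contain newlines or any would-be delimiter. A tiny sketch (assuming single-byte characters, since ${#var} counts characters, not bytes):

                          ```shell
                          # Encode one value as a netstring: "<len>:<data>,"
                          netstring() { printf '%d:%s,' "${#1}" "$1"; }
                          # A payload containing a newline is no problem:
                          payload=$(printf 'hello\nworld')
                          netstring "$payload"
                          ```

                          A reader just parses the length up to the colon, then consumes exactly that many bytes; no in-band delimiter to collide with.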

                          1. 3

                            Or expose structs over sysctls and ioctls instead of making these damn virtual filesystems…

                            1. 2

                              Yesterday I was experimenting with getting that information through netlink. There’s a kernel configuration option called CONFIG_TASKSTATS (check if it’s enabled in your kernel config first: /boot/config* or /proc/config.gz).
                              The documentation can be found here: https://www.kernel.org/doc/Documentation/accounting/taskstats.txt There are a bunch of C headers to be able to access the features, and there are even Go and Python libraries. I’ve been testing with the Python one, gnlpy (https://www.pydoc.io/pypi/gnlpy-0.1.2/autoapi/taskstats/index.html).
                              However, I’ve been running into trouble with the permissions part, i.e. capabilities(7). This is something I haven’t found much documentation on. This RFC http://www.faqs.org/rfcs/rfc3549.html and this manpage https://linux.die.net/man/7/netlink say that users need the cap_net_admin capability, but it’s not working for me.

                              5.  Security Considerations
                              
                                 Netlink lives in a trusted environment of a single host separated by
                                 kernel and user space.  Linux capabilities ensure that only someone
                                 with CAP_NET_ADMIN capability (typically, the root user) is allowed
                                 to open sockets.
                              

                              I’ve been trying sudo setcap 'cap_net_admin=p cap_net_admin+i cap_net_admin+e' t.py but it still doesn’t execute as a normal user. But it works perfectly as root.

                              Maybe someone here has more info on the topic.

                              EDIT: As 1amzave suggested, copying the python interpreter and assigning the capabilities to it works fine.

                              1. 1

                                I’m gonna hazard a guess that your setcap not being effective might have something to do with it being interpreted (via a shebang line) rather than a directly-executed binary. Maybe create a copy of your python interpreter (presumably you don’t want to blindly grant CAP_NET_ADMIN to all python code), setcap that, and change your shebang line to use it instead.

                                1. 1

                                  You’re right, that was the issue. Copying the python interpreter in a home directory and setting the capabilities on it did the trick. The python script itself doesn’t need capabilities.

                                  Overall, I think netlink is great but unlike procfs it’s not that easily accessible.

                          2. 1

                            ASCII actually has field delimiter characters, which are very rarely used – it’s a shame, because it would make parsing trivial in cases like that.

                            1. 5

                              Not really, because file names can contain those field delimiter characters.

                              It would’ve been nice if there were stricter rules about what characters are allowed in a file name. When would you ever want a newline, or field delimiter, or carriage return, or BEL, in a file name?

                              1. 2

                                BEL, in a file name

                                When I want to play a small prank by making ls in a given directory make the terminal beep. :)

                              2. 4

                                Field delimiters are also valid file names…

                            1. 6

                              TL;DR: Similar to gRPC, but drops the requirement for HTTP2, which can lead to a ton of problems when combined with AWS ELBs. This is pretty cool.

                              1. 3

                                I am disappointingly used to libraries like this that try to focus on just the simple parts being underpowered, but I’m with you: this is clean and surprisingly powerful, without actually sacrificing much at all. I may use this in anger as early as tomorrow.

                                1. 2

                                  This looks incredibly cool. I’ll be looking for additional language support for sure.

                                1. 1

                                  A decent list. However, on a couple of items… I have had different experiences apparently. Example — I will never work at a place that requires pair programming. Good experiences with code walkthroughs though.

                                  1. 4

                                    Some nice research and supposition there. If correct… yikes.

                                      1. 3

                                        here

                                        He is so “crazy” that one of his former colleagues has a totem that they use to mock him in his absence? Fascinating.

                                        1. 4

                                          I would just keep in mind that Michael is a member of this community when making comments like this.

                                          1. 2

                                            I think being skeptical is fine, and perhaps even warranted. However, the top level link /seems/ like a fairly reasonable read to me. Judge it on its content.

                                            1. 5

                                              I think the problem is he speaks pretty authoritatively despite his expertise being based on just his experiences, or his perception of his experiences. It sounds good, but a lot of things sound good and are only occasionally true, not always true.

                                              I used to think he was just idiosyncratic til I had an experience that contradicted his claims, and then he just said “wait til you enter the real world.” I’m actually a few years older than him I believe. He’s incapable of imagining that things may be different. Even if he were right, it’s a very rigid view that doesn’t account for contrary evidence. I’m wary of trying to learn anything from people like that.

                                            2. 2

                                              He showed himself to be pretty out there at Google, when he rage-quit with a particularly nutty letter to the entire company after not getting a promotion. Lots of bits of that letter were memes when I left Google (“I have T7-9 vision!”).

                                              1. 2

                                                He showed himself to be pretty out there at Google, when he rage-quit with a particularly nutty letter to the entire company after not getting a promotion.

                                                It wasn’t about not getting a promotion. I was marked down in “Perf” for speaking up about an impending product failure. (Naively, I thought that pointing out the problem would be enough to get it fixed. It was obvious to me what was about to go wrong– and I was later proven right– but I lacked insight into how to convince anyone.) I found out years later that I was put on a suspected unionist list. Needless to say, the whole experience was traumatic. There’s a lot that I haven’t talked about.

                                                The mailing list activity… I’m embarrassed by that. I did not handle the stress well.

                                                Lots of bits of that letter were memes when I left Google (“I have T7-9 vision!”).

                                                Isn’t it a sign of success, if people are talking about your mistakes several years later?

                                              2. 1

                                                Personally I think Michael O Church is a genius but I’m keenly aware that there’s a fine line between genius and madness. /u/churchomichael is not Michael O Church but seems to be another very intelligent writer but without the anger and national and international politics interest.

                                                1. 1

                                                  doing some digging he seems….. crazy.

                                                  I’ve had a lot of difficult experiences, some related to the political exposure that comes from being outspoken in a treacherous industry. I’ve needed treatment for some of the after-effects.

                                                  Like, he got banned from Hacker News, and also Wikipedia.

                                                  And Quora, too! Wikipedia I actually deserved; that was 2006 and I was being a jerk. The Quora ban was specifically requested by Y Combinator after they bought it.

                                                  He just seems to spend an insane amount of time writing ranty comments/articles/etc online and not much else.

                                                  It’s not that much of my time.

                                                  See /u/churchomichael

                                                  That’s not me. I’m as surprised as you are that someone would name his account in homage to me. There are also Reddit accounts (and even a subreddit!) that exist to mock me.

                                                  Dude just seems to want to complain.

                                                  No, I’d like to fix things, but the odds of that are very, very poor.

                                                  He has 45 suspected sockpuppet accounts on Wikipedia

                                                  Yeah, most of those accounts don’t exist. That’s a hit piece. I’m embarrassed by some of what I did on Wikipedia in 2003-6, but I never had 45 alternate accounts, though I did use so-called “role accounts” back when it was accepted.

                                                1. 2

                                                  Maybe take a look at Perkeep (né Camlistore)?

                                                  1. 1

                                                    I’d love perkeep to support that, but it definitely doesn’t right now. Wouldn’t be that hard to add though…

                                                  1. 1

                                                    Anyone have ideas why FreeBSD did so poorly on the read intensive portion of the test?

                                                    I’m wondering if the zfs ARC was fighting over memory with PostgreSQL. Maybe some additional ARC memory limit tuning (eg. vfs.zfs.arc_max) would have helped? Maybe additionally setting primarycache=metadata too.

                                                    1. 1

                                                      The fact that its ZFS filesystem was configured with lz4 compression while none of the others had any compression at all might have something to do with it. Decompression is done per block, and each block was configured to be 8K bytes. The more concurrent requests are made to the filesystem, the higher the chance that different blocks are requested, and the more data the filesystem has to decompress before it can reply, even after fetching.

                                                    1. 2

                                                      Sadly they didn’t try DragonFly BSD. Those people have done quite a bit of work in that area and the last benchmark on their website is over 5 years old.

                                                      1. 1

                                                        Agreed. I’m (just a nobody/user) hopeful that in a few more releases, presumably after HAMMER2 gets more dev time, the benchmarks will be re-done.

                                                      1. 2

                                                        Seems like an interesting project.

                                                        I would probably reach for wget (which can read a file of URLs to fetch) or a combination of curl and xargs (or GNU parallel) before trying a bespoke tool like this, though. That said, the X-Cache-Status statistics are neat, if you need that.

                                                        1. 2

                                                          That’s what I thought. When looping through a file with a few hundred thousand entries with bash/curl, I had a throughput of ~16 requests/second, while cache_warmer easily was able to do >500 req/s.

                                                          Thanks for the hint, I should probably add that to the post.

                                                          1. 1

                                                            Indeed, looping via bash would be slow due to not reusing the connection. With a carefully crafted xargs, you should be able to get multiple URLs on the same line (e.g. curl url1 url2 url3...). Then curl /should/ reuse a connection in that case. If curl had a ‘read URLs from a file’ parameter it would be quite a bit easier to script, but alas it currently does not.
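
                                                            Something along these lines, with echo standing in for curl so the sketch runs anywhere; swap in e.g. curl -s -o /dev/null for real cache warming:

                                                            ```shell
                                                            urls=$(mktemp)
                                                            printf '%s\n' url1 url2 url3 url4 url5 > "$urls"
                                                            # xargs packs up to 3 input URLs per invocation, so each
                                                            # curl run could reuse one connection across those URLs:
                                                            xargs -n 3 echo curl -s < "$urls"
                                                            ```

                                                            With a real batch size of a few hundred URLs per invocation, the per-connection overhead mostly disappears.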

                                                        1. 4

                                                          FWIW you can use keepalive with Net::HTTP, you just have to set the header. Ruby just doesn’t keep a global cache of connections, it’s meant to be a low-level library after all.

                                                          I think all HTTP libraries suck ass, in that they require pretty detailed knowledge to use 100% properly and usually require reading the source code. I also don’t think this is avoidable, HTTP use varies so wildly in practice that sensible defaults aren’t necessarily even sensible for all applications of the same archetype.

                                                          1. 1

                                                            I think all HTTP libraries suck ass, in that they require pretty detailed knowledge to use 100% properly and usually require reading the source code.

                                                            In general, I agree. I do find the python requests library pretty usable though.

                                                            1. 1

                                                              Yes it’s great for the common use case! But it isn’t perfect for every use case. I bet I’ve done something in the last 3 days that would be annoying to do in requests. In fairness, I also do a lot of weird stuff.

                                                          1. 17

                                                            If only json had allowed trailing commas in lists and maps.

                                                            1. 9

                                                              And /* comments! */

                                                              1. 3

                                                                And 0x... hex notation…

                                                                1. 3

                                                                  Please no. If you want structured configs, use yaml. JSON is not supposed to contain junk, it’s a wire format.

                                                                  1. 4

                                                                     But YAML is an incredibly complex and, truth be told, rather surprising format. Every time I get it, I convert it to JSON and go on with my life. The tooling and support for JSON are a lot better; I think YAML’s place is on the sidelines of history.

                                                                    1. 4

                                                                      it’s a wire format

                                                                      If it’s a wire format not designed to be easily read by humans, why use a textual representation instead of binary?

                                                                      If it’s a wire format designed to be easily read by humans, why not add convenience for said humans?

                                                                      1. 1

                                                                         Things don’t have to be black and white, and they don’t even have to be specifically designed to be something. I can’t know what Douglas Crockford was thinking when he proposed JSON, but the fact is that since then it did become popular as a data interchange format. It means it was good enough and better than the alternatives at the time. And it still has its niche despite a wide choice of alternatives along the spectrum.

                                                                         What I’m saying is that adding comments is not a sure-fire way to make it better. It’s a trade-off, with the glaring disadvantage of being backwards incompatible. Which warrants my “please no”.

                                                                    2. 1

                                                                      http://hjson.org/ is handy for human-edited config files.

                                                                      1. 1
                                                                      2. 5

                                                                        The solutions exist!

                                                                        https://github.com/json5/json5

                                                                        I don’t know why it’s not more popular, especially among go people.

                                                                        There is also http://json-schema.org/

                                                                        1. 3

                                                                          I had to do a bunch of message validation in a node.js app a while ago. Although as Tim Bray says the spec’s pretty impenetrable and the various libraries inconsistent, once I’d got my head round JSON Schema and settled on ajv as a validator, it really helped out. Super easy to dynamically generate per message-type handler functions from the schema.

                                                                          1. 2

                                                                            One rather serious problem with json5 is its lack of unicode.

                                                                          2. 3

                                                                             I think this only shows that JSON has chosen tradeoffs that make it more geared toward being edited by software, while having the advantage of being human-readable for debugging. JSON as config is not appropriate. There are so many more appropriate formats (TOML, YAML or even INI come to mind); why would you pick the one that doesn’t allow comments or nice sugar such as trailing commas and multiline strings? I like how Kubernetes uses YAML for its configuration files but seems to work internally with JSON.

                                                                            1. 8

                                                                              IMO YAML is not human-friendly, being whitespace-sensitive. TOML isn’t great for nesting entries.

                                                                              Sad that JSON made an effort to be human-friendly but missed that last 5% that everyone wants. Now we have a dozen JSON supersets which add varying levels of complexity on top.

                                                                              1. 11

                                                                                “anything whitespace sensitive is not human friendly” is a pretty dubious claim

                                                                                1. 5

                                                                                  Solution: XML.

                                                                                  Not even being ironic here. It has everything you’d want.

                                                                                  1. 5

                                                                                    And a metric ton of stuff you do not want! (Not to mention…what humans find XML friendly?)

                                                                                    This endless cycle of reinvention of S-expressions with slightly different syntax depresses me. (And yeah, I did it too.)

                                                                                    1. -5

                                                                                      Triggered.

                                                                                      1. 13

                                                                                        Keep this shit off lobsters.

                                                                              1. 1

                                                                                It will be interesting to see where Apple goes with this. Will they stick with Face ID and /try/ to improve the tech further, or will they go back to fingerprint readers now that Synaptics has announced functional in-display readers, or some combination?

                                                                                1. 1

                                                                                  Based on their statements so far I am sure they will stick with Face ID. It also works pretty well in practice.

                                                                                  My wife has an iPhone X, and while she wasn’t too fond of Face ID when she got her phone, it hasn’t really given her any problems since she stopped consciously testing it in every situation and lighting condition (alas, no glasses, scarves, or whatnot). It is not better for all use cases, since it was easier before to press an on-screen button to see received messages or the time, but I expect she will get used to this eventually too.

                                                                                1. 5

                                                                                  I think the shift from TCP to UDP, with libraries providing any desired TCP-like semantics on both ends, instead of baking them into the protocol, seems like the way to go these days. Historically, game netcode often did this anyway to improve performance.

                                                                                  Hopefully this results in a reduction of “meddlesome middleboxes” and helps prevent the future ossification the article speaks about. It seems a bit unfortunate that IP isn’t the layer this is happening at, but so many networks filter any non-TCP/UDP IP packets these days (aside from a few VPN-specific ones) that building future protocols atop UDP seems to me to make the most sense. Otherwise something like SCTP or DCCP might have been preferable.
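                                                                                  As a sketch of what “TCP-like semantics in a library” can mean (not the article’s protocol — the framing and names here are illustrative): each datagram carries a sequence number, and the receiving side reorders and deduplicates before delivering, much like game netcode has long done over UDP.

                                                                                  ```python
                                                                                  import struct

                                                                                  # 4-byte big-endian sequence number prepended to each payload.
                                                                                  HEADER = struct.Struct("!I")

                                                                                  def frame(seq: int, payload: bytes) -> bytes:
                                                                                      return HEADER.pack(seq) + payload

                                                                                  class Reassembler:
                                                                                      """Delivers payloads in order, dropping duplicates and stale packets."""
                                                                                      def __init__(self):
                                                                                          self.next_seq = 0
                                                                                          self.pending = {}

                                                                                      def receive(self, datagram: bytes) -> list[bytes]:
                                                                                          seq = HEADER.unpack(datagram[:HEADER.size])[0]
                                                                                          if seq >= self.next_seq:
                                                                                              self.pending[seq] = datagram[HEADER.size:]
                                                                                          delivered = []
                                                                                          while self.next_seq in self.pending:
                                                                                              delivered.append(self.pending.pop(self.next_seq))
                                                                                              self.next_seq += 1
                                                                                          return delivered
                                                                                  ```

                                                                                  A real library would add retransmission and acknowledgements on top; the point is just that all of this lives in the endpoints rather than in the protocol middleboxes can see.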

                                                                                  1. 17

                                                                                    In OpenBSD there is a strict requirement that base builds base.

                                                                                    This is actually a pretty reasonable requirement.
                                                                                    I’ve never thought about it before, but to challenge C in real operating systems development, a safe programming language should probably be cheap and fast to compile. To me, LLVM is a huge downside of Rust (and many other languages).

                                                                                    1. 9

                                                                                      openbsd base is built with clang now, which uses llvm. But yeah, llvm is not fast.

                                                                                      1. 4

                                                                                        openbsd base is built with clang now…

                                                                                        I believe that is only currently for the x86 & amd64 platforms.

                                                                                        1. 2

                                                                                          Is (modern) gcc THAT much faster to compile itself?

                                                                                          1. 5

                                                                                            I’ve been playing a little with rumprun-netbsd with a self-compiled toolchain, for which I needed to build gcc and some build tooling; on top of that I had to compile rustc. Yes, compiling gcc is definitely faster.

                                                                                        2. 7

                                                                                          To me, LLVM is a huge downside of Rust (and many other languages).

                                                                                          We are aware of that, and there are plans to make LLVM just one of several backends at some point.

                                                                                          https://github.com/stoklund/cretonne is a beginning of what could be a fast code-generator. Obviously, proper code generation and optimisation is a huge task.

                                                                                          It’s annoying currently, but, TBQH, mostly constrained by the number of hands available to work on the problem.

                                                                                          1. 6

                                                                                            Hehe, if they want speed try qbe https://c9x.me/compile/ . :P

                                                                                            1. 3

                                                                                              I wouldn’t be surprised if something like that is used, but Rust very much needs certain optimisations, so all these things need to be checked.

                                                                                              Also, I should add that Rust does encode a lot of information beneficial to the code generator (such as full aliasing information), so having something that doesn’t assume the input language to be c-like has a huge benefit. LLVM for example suffers from that problem.

                                                                                              Also, an interface for that must be written first :).

                                                                                              1. 1

                                                                                                I was going to suggest qbe myself… ;-)

                                                                                            2. 4

                                                                                              but probably to challenge C on real operating systems development a safe programming language should be cheap and fast to compile.

                                                                                              The Wirth-style systems languages always followed that pattern. The compiler wasn’t acceptable until it could compile itself lightning-fast. The early versions of Pascal got ported to over 70 architectures back when machines were really diverse. I haven’t seen any indication Theo et al. tried to adopt that stuff. So, I’m calling the compile-time argument bogus, since they wouldn’t switch anyway.

                                                                                              1. 10

                                                                                                So instead of “rewrite everything in Rust” it should have been “rewrite everything in Pascal”? When does it end? FWIW, Theo at one point had very high praise for the Plan 9 C compiler precisely because it was so fast, but there used to be licensing issues, and I’m not sure how well it would work with today’s source tree.

                                                                                                1. 4

                                                                                                  You’d have asked the question, “So, we should keep fighting with these same code-level issues with C for everything we write? When does it end?” Then, you would’ve just started writing Pascal with the other stuff ported later over time. Alternatively, a C-like language without its problems plus some benefits of alternatives. You wouldn’t have to rewrite anything since you’d still be benefiting from it. If the compilers didn’t catch up, you’d make it output C. You’d still be writing that stuff now with the mitigations and such just being extras because your code prevents most problems. Then, you’d be looking to add benefits of tech like Eiffel Method (esp Design-by-Contract), SPARK/Frama-C, or recently Rust’s temporal/concurrency safety. Just like Ada and Eiffel camps did over time upgrading their languages for improved safety with backward compatibility.

                                                                                                  Or did you think the Ada and Eiffel people have to rewrite their stuff in a new language every 5-10 years to avoid common vulnerabilities or problems maintaining large systems that seem to be C-specific?

                                                                                                2. 1

                                                                                                  Edit OK, I now see you’re continuing on your comment here:

                                                                                                  https://lobste.rs/s/4cf21p/re_integrating_safe_languages_into#c_4ch7ug

                                                                                                  The threading on this site takes a bit of getting used to…

                                                                                                  Previous version for transparency:

                                                                                                  OpenBSD supports 12 hardware platforms. Or am I missing something from your statement below?

                                                                                                  The early versions of Pascal got ported to over 70 architectures back when machines were really diverse. I haven’t seen any indication Theo et al tried to adopt that stuff.

                                                                                                  1. 3

                                                                                                    Portability requiring C was one of the old counters. In this case, Rust was dismissed on portability. I always point out that systems languages more easily ported than C, or already pretty portable, got no adoption from them. Plus, they can use one that compiles to C if it’s that important.

                                                                                                    Yet, even if Rust does compile to those architectures, he’ll just gripe about something else instead of using it.

                                                                                                    Note: I’ll respond to Ted’s bigger comments later today or tonight when I’m off work. This one was quick.

                                                                                              1. 2

                                                                                                The password discussion part bothered me a little bit, as he didn’t talk about password storage (hash type) impacting crackability, yet /did/ say “takes 3 days to break”. Then provided a passphrase that he said would take about “550 years to break”, but no mention that dictionary attack resistance is important. I assume he kept it at a high level, just to move the talk along though, and not get fixated on it.
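                                                                                                A rough back-of-the-envelope sketch of why the hash type matters as much as the password itself: crack time scales directly with the attacker’s guess rate, which differs by many orders of magnitude between fast and slow hashes. The rates below are illustrative assumptions, not benchmarks.

                                                                                                ```python
                                                                                                # Assumed guess rates; real numbers vary widely by hardware and hash settings.
                                                                                                GUESSES_PER_SEC = {
                                                                                                    "fast hash (e.g. unsalted MD5 on a GPU rig)": 1e11,
                                                                                                    "slow hash (e.g. bcrypt at a high cost factor)": 1e4,
                                                                                                }

                                                                                                def years_to_exhaust(keyspace: float, guesses_per_sec: float) -> float:
                                                                                                    """Worst-case time to try every candidate, in years."""
                                                                                                    return keyspace / guesses_per_sec / (3600 * 24 * 365)

                                                                                                keyspace = 95 ** 8  # 8 characters drawn from the 95 printable ASCII symbols
                                                                                                for name, rate in GUESSES_PER_SEC.items():
                                                                                                    print(f"{name}: {years_to_exhaust(keyspace, rate):,.1f} years")
                                                                                                ```

                                                                                                The same password goes from hours to millennia depending only on how it is stored — which is why quoting a single “days to break” figure without naming the hash is misleading. (Dictionary attacks shrink the effective keyspace further, which is the other caveat the talk skipped.)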

                                                                                                The other part that jumped out at me a bit was his recommendation to “avoid YAGNI”. I found that particular element a bit odd, but perhaps he was distinguishing between /perceived/ YAGNI, which someone lacking domain knowledge may invoke wrongly, and /real/ avoidable complexity due to adding unnecessary features/flexibility.

                                                                                                Otherwise a pretty good talk.

                                                                                                1. 4

                                                                                                  The recommendation about rejecting YAGNI is that people with domain knowledge know what will be needed and therefore design their systems around those things. The idea behind YAGNI is precisely the rejection of the idea that you know what you will need ahead of time. I think of YAGNI as a sort of anti-intellectualism which rejects domain knowledge and its value in building systems.

                                                                                                  1. 2

                                                                                                    Hmm. Then perhaps I have been lucky enough to predominantly encounter a simplified/distilled/corrupted version of YAGNI, and not its “higher form”. I have generally seen it invoked when a developer is adding what appears to be unnecessary flexibility or modularity to a component long before such a thing would be needed (if ever). More of a reminder to “start small”.

                                                                                                    If a domain expert thought a feature was necessary, or could foresee that the complexity would indeed be relevant in the future, then of course you would want to bake that in as soon as possible, so you can have a more cohesive design. With your description, it does now seem like this is more what the presenter was aiming for.