1. 4

    I’ve heard that CMake makes complicated builds easier to manage, but this particular tutorial doesn’t really show that. In fact, it’s actually much less “work” to write a simple Makefile in this case. As such, I left the tutorial with no reason to care about CMake, and instead of a “woah! you knocked my socks off!”, my reaction was, “so what?” But! I’ve been using make for years. For someone new to everything, I think this shows that CMake is easier to reason about. It just wasn’t enough to sway me from my more familiar make.

    Criticism aside, I really liked the style of the tutorial, and how the repo is all-inclusive. I hope to see more tutorials in the future adopt this style (instead of the typical blog post that isn’t all-encompassing and has to link out to somewhere else, etc).

    1. 3

      As someone who’s used CMake professionally, I agree! It’s a bit simplistic for the basic use case, with the exception of showing the link capabilities. It also doesn’t mention the other main advantage: once you’ve written the CMakeLists file, you can take it to other systems and (usually) get a clean make system: Linux, *BSD, OSX, even Windows (by generating Visual Studio solution files, or Cygwin).
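
      To make that concrete, the portable part is a single CMakeLists.txt along these lines (a minimal sketch; project and file names are made up):

          # Hypothetical layout: one C file at the repo root, linking libm.
          cmake_minimum_required(VERSION 3.5)
          project(demo C)

          add_executable(demo main.c)
          target_link_libraries(demo m)

      The same file drives cmake -G "Unix Makefiles" on the Unixes and one of the Visual Studio generators on Windows.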

      This is also timely: I’ve been keeping a tool/library evaluation repo for a personal project, using cmake to make sure they all work. Would people be interested in this repo if I added docs similar to this tutorial?

      1. 1

        This is also timely: I’ve been keeping a tool/library evaluation repo for a personal project, using cmake to make sure they all work. Would people be interested in this repo if I added docs similar to this tutorial?

        Yes, absolutely!

        1. 1

          Ok, I’ll clean it out and see if I can get it up in the next few days.

          (Also, I’d like to echo apg’s compliments on the style of the tutorial: it is a good introduction for people totally unfamiliar with CMake)

      2. 1

        As such, I left the tutorial with no reason to care about CMake, and instead of a “woah! you knocked my socks off!”, my reaction was, “so what?” But! I’ve been using make for years. For someone new to everything, I think this shows that CMake is easier to reason about. It just wasn’t enough to sway me from my more familiar make.

        Yup, you are right. My intention was not to convince anyone to use CMake instead of a plain Makefile (or any other tool X), but to help them quickly get up and running with CMake if they want to use it.

        Criticism aside, I really liked the style of the tutorial, and how the repo is all-inclusive.

        Thanks!

      1. 10

        I’ve been building a C testing framework for work and heard about Snow on Lobsters, so I’m planning to peruse its features for inspiration. The one I’m building isn’t as macro-heavy/macro-driven, but I think there are a number of advantages to leveraging macros, so I want to see what I can add.

        1. 5

          You should have a look at greatest, which has worked out great for me in the past. I don’t do a lot of C, but dropped my own little testing setup for greatest, and haven’t looked back.

          1. 2

            I’ll check it out, thanks for the link. At a glance, my framework does look similar.

            Probably worth mentioning, I am sort of targeting this at folks that develop software using the traditional full cycle SDLC and have to live through that cycle many many times. As a result, I also have a goal to formally support requirements engineering. Basically what that means is that as a test engineer writes a test for a developer to implement against, they can optionally map it to either a requirement (by ID), a type of requirement (functional, non-functional, performance, etc), or a set of requirements (multiple IDs). On a very large project with many moving parts, support for this in a test tool can be invaluable.

            The nice side benefit of this is that if you’re using a tool like Git, you can scroll through major points in the Git history and clearly see the progression of development not just by what tests are passing, but also by how many of the requirements solicited from the customer/stakeholder are satisfied. Eventually, I’ll support generating metrics from the tests in a common business/professional format (such as an Excel spreadsheet, so managers can create visualizations and whatnot).

            I think it’ll be useful for developers because they don’t just have to point at a results table and say “I’m passing all the tests”, they can point at a results table and say “I’m passing all the tests and here’s also proof that the tests passing fully cover all the initial requirements laid out, therefore the product you asked for is built” (and of course if they don’t like what they got, they can go talk to the requirements engineer :P )

            1. 6

              Hi, greatest author here. :)

              Something that might be useful in your situation: with greatest, if you have formal requirements IDs, you could use them as tags in the test function names, and then run tests based on those – you can use the -t command line switch in the default test runner to run tests based on test name / substring match. (Similarly, -x skips matching tests, so, for example, you could skip tests with __slow in the name.) If you name tests, say, TEST handle_EEPROM_calibration_read_failure__ENG_471(void) { /* ... */ }, then ./tests -t ENG_471 would run that. (Running tests by name also helps with quick feedback loops during development.)
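
              A minimal sketch of that naming scheme with greatest (the test body and requirement ID here are made up):

                  #include "greatest.h"

                  /* Hypothetical test, tagged with a made-up requirement ID. */
                  TEST handle_EEPROM_calibration_read_failure__ENG_471(void) {
                      ASSERT_EQ(0, 0);   /* real checks would go here */
                      PASS();
                  }

                  GREATEST_MAIN_DEFS();

                  int main(int argc, char **argv) {
                      GREATEST_MAIN_BEGIN();   /* parses -t, -x, etc. */
                      RUN_TEST(handle_EEPROM_calibration_read_failure__ENG_471);
                      GREATEST_MAIN_END();
                  }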

              I did some automotive embedded work several years ago. We had a whole requirement traceability system that involved scraping requirement IDs out of comment headers for tests, which eventually fed into coverage reports.

              1. 1

                Oh wow, that’s pretty cool. That tagging system can certainly be useful for more than just the requirement IDs but ya, that would work. Being able to filter tests by the tags is also really neat and I hadn’t thought of that as a feature.

          2. 1

            I’d be curious to see what someone could come up with if the test framework didn’t use the C preprocessor and used something else instead. Might be a fun exercise. But then again, maybe I’m just not liking the preprocessor lately.

            1. 1

              What would it look like to drive tests, for C programs, in say Lua? It seems like a wonderful idea, but I’m not sure if the boilerplate stuff can be automated in such a way to make it feasible…

              1. 1

                I’m not sure either, but it still might be an interesting exercise (or mini research project). Maybe I should be the one to look into it since I’m the one that spoke up. ;)

                1. 1

                  Actually, this sounds like something @silentbicycle has probably already tried. Might be worth checking in with him first. :)

          1. 16

            This is delicious on so many levels! Truly lobste.rs worthy

            1. It deals with compilers
            2. It has the true hacker spirit: create arbitrary - but in a trippy way sensible - constraints, then throw a ton of ingenuity to solve it. Why? Because we put it there!
            3. It has humor in it. Like tons!
            4. It’s secretly subversive of the stuffy world of academia - skillfully threading the line between sublime and scandalous
            5. The paper, the compiler, and the compiler’s product are in plain text!
            6. It is SO WELL WRITTEN! Even though he writes “This one may be a bit impenetrable for non computer scientists”, I disagree - anyone with patience can understand it. It starts almost from first principles and takes you gently to soaring heights.

            I regret I have only one upvote to give.

            1. 6

              His other work is as good, if not better. He’s written NES AI (learnfun, playfun), turned 2D NES games into 3D worlds, computed every portmanteau, and even proved the undecidability of generalized kerning. His youtube channel has videos for these (and more insanity!). Also, he co-organizes SIGBOVIK, which celebrates all these “research” topics every year around April 1st. It’s the only thing on the Internet around that time worth reading!

            1. 11

              I think I would have preferred the source code….

              1. 3

                I could go either way on this. On the one hand, our intellectual property laws are horrible, and the game is 20 yrs old, so who cares?

                But, on the other, I’d be pissed if I lost my camera and someone decided to dump the contents on imgur.

                I think the reason there is any debate around this is because the owner is a giant, successful game corporation, which seemingly has nothing to lose from sharing the source. But if that were actually true, why wouldn’t they on their own terms?

                1. 9

                  Many game publishers would rather have their game rot into obscurity and make no profits than share the code. Abandonware is so common these days. I think it’s mostly rooted in a bad theoretical perspective of how the software market works.

                  1. 2

                    According to an IP lawyer friend of mine, software companies are often afraid that if their source gets out it will more likely be discovered that they accidentally infringed someone else’s IP in ways they weren’t even aware of.

                    1. 1

                      This is the reason for most of the NDAs in the hardware industry. It’s a patent minefield. Any FOSS hardware might get taken down. I don’t know as much about the software industry, except that big players like Microsoft and Oracle patent everything they can. A quick Google looking for video game examples got me this article. Claims included in-game directions, d-pad, and unlocking secrets, but I haven’t vetted this article’s claims by reading the patents or anything.

                    2. 1

                      Many game publishers would rather have their game rot into obscurity and make no profits than share the code.

                      I think it comes down to one thing, actually: Do you believe in the betterment of society (sharing), or do you believe in maximizing profits (greed)? In the last 20 years, we’ve seen this go from strictly black and white, to a full color spectrum. Blizzard, even Microsoft, are somewhere in the middle, but neither of them have shared much of their core, profit-producing, products.

                      I think it’s mostly rooted in a bad theoretical perspective of how the software market works.

                      Can you clarify a bit? I think what you’re saying might be similar to what I’m thinking… that the media industries have not yet adapted from “copies” sold as a metric of success, despite tons of evidence and anecdotes suggesting other ways to success.

                      1. 1

                        We’re saying the same thing yes. It’s hard for businesses to realize that price discrimination can go down to $0 and you can still make a hearty profit.

                    3. 1

                      I bet there’s a lot of code in there that’s still heavily used in their games today, so probably not accurate to say they have nothing to lose.

                      1. 1

                        One would imagine! Though, the engines of 1998 vs. the engines of 2018 have probably changed quite significantly.

                  1. 5

                    I’m curious as to Google’s motivation for Fuchsia: what does it offer over Linux?

                    1. 26

                      They’ll be able to use a license other than the GPL and have a stable ABI. This would please a lot of device manufacturers.

                      1. 4

                        Isn’t the Linux ABI somewhat stable? Isn’t that a sticking point for Linus, not breaking userspace?

                        1. 12

                          The syscall interface is stable. The ABI for kernel modules/drivers is not.

                          1. 2

                            If this is the main objective, didn’t they “solve” this with Android’s new hardware abstraction layer?

                            Rebuilding from the ground up seems like a huge amount of work when we can build up piecemeal stuff pretty nicely.

                            1. 9

                              I doubt the ABI breakage has been solved with the /vendor partition. As I understand it, this change just allows for a clear separation between what is and isn’t the core Android system. Manufacturers can still have binary blobs in /vendor and not release their customizations and plugins kept there. ABI breakage happens at the kernel level, and can only be solved in the Android world if Google forced all manufacturers to one standard kernel for all devices.

                              The GPL is also something they can’t solve. This is probably the saddest part of the Fuchsia project, and echoes Google releasing Chrome after dumping so much money into Firefox. They support the OSS tool, but then dump it because they want their own control and licensing. Their new tool is still OSS, but with a license that’s better for them. I wrote about how companies embrace/use OSS a while back:

                              http://penguindreams.org/blog/the-philosophy-of-open-source-in-community-and-enterprise-software/

                      2. 11

                        We’ve seen a large number of Google devices coming to market (tablets, phones, notebooks). I wouldn’t be surprised if they were on their way to an “Apple”-like hardware model, where we have a 3rd, proprietary, but fully integrated environment to choose from. MSFT has the same sort of model going, with the Surface line of things, so it could be that we have an all-out war between MSFT and Google for the next generation of business machines. I mean, look at the product lines:

                        • Office: Covered by both, with MSFT having the product to beat
                        • Email/Calendaring/Collaboration/etc: Exchange / Sharepoint vs Apps for Business
                        • Managed “IT” services: Windows Server (run yourself, with domains etc) vs. Apps for Business

                        Apple isn’t a threat to MSFT in this space, though it has a lot of similar, consumer grade, products.

                        Obviously, there’s a bootstrapping problem for Google here, but, I’m sure there’s nothing stopping the Fuchsia team from running a Dalvik VM, or making Android apps run seamlessly on it, and I’d fathom that that’s part of the plan for adoption, with Dart/Flutter being the answer to C#/.NET.

                        Google recently killed off Chrome Apps, so the Chromebook isn’t being extended beyond running Android apps and Chrome Extensions (very limited as far as I can tell), which seems likely to lead to an eventual death for the Chrome OS line of products.

                        So, you have the ability to take what you’ve learned from Chrome OS, and integrate all the hosted “cloud” stuff seamlessly into a full-featured operating system that has better native app abilities? Seems like a lot of potential to gain market share, especially as they are doing so in the open: while “what’s this Fuchsia thing?” is being asked, people can look at it, play with it, and audit the design and source code, and, of course, write blog posts about how it’s broken. Basically, they’re in a really good position here, technically, and from a marketing standpoint. It markets itself! And jeesh! Think of all the baggage they are eliminating by starting fresh.

                        Black hats are salivating, and keeping track of all the errors that are being made in commits, just hoping that they become exploitable in some future, after aging a couple years without being touched….

                        1. 7

                          See the linked article in the article about how Google likes to build two of everything. I figure it’s essentially a hedge. If it turns out to be awesome in some way that Linux can’t be, they can move to it. If it turns out they need to do something they can’t do on Linux, they have this other OS all ready to go. If it turns out that it doesn’t do anything as well as Linux-based systems already can, then they aren’t really out anything but some money and engineer time, which they have plenty of. Google likes to do this in a number of domains to ensure that they stay on top no matter which way the industry goes.

                          1. 11

                            Linux has accumulated a lot of irrelevant features (attack surface) over time?

                            1. 4

                              They have the money and time to do it? Or perhaps they are tired of trying to make the fragmented Android/Linux ecosystem work for them and want to take a new approach they can fully control. UI-wise, I’m happy to see that it seems to have a functional out-of-the-box app launcher ala KISS, though I guess any UI component will massively change in the future.

                            1. 3

                              By this logic, nothing ever would have had to be invented. At least if you carry it through to the end, the way it was stated, not the way it was intended.

                              1. 12

                                This particular line of refutation and critique is probably the most common refrain I hear when this sort of article or sentiment is brought up. It’s also wrong–note the “maybe” in the post title.

                                Let’s not flatter ourselves: yet another “HTML DOM but with better syntax”, “jQuery but with cleaner syntax”, “HTML DOM but with databinding”, “Angular but with smarter data-binding this time”, “Angular but with version-breaking and typescript”, “HTML DOM but with better diffing”, “React but artisanal”, “React but artisanal but also Angular”, is hardly invention in the sense you probably mean it.

                                1. 10

                                  Our use of common tools has forced us into fixing the things that bother us about them, instead of developing truly new ways of solving our problems. The common solutions don’t make us think, and destroy our ability to think outside the box.

                                  What would software be like if the free software movement never happened? Instead of “buying” loose fitting uniforms, I bet we’d all be excellent fabric makers, and tailors of original clothes that fit just right.

                                  1. 3

                                    And worse, now that we have too many tools to ever fix any of them, there is actually an entire generation of “developers” who simply have no capacity to write quality, durable code.

                                    What would software be like if the free software movement never happened? Instead of “buying” loose fitting uniforms, I bet we’d all be excellent fabric makers, and tailors of original clothes that fit just right.

                                    Some of us anyway.

                                    But unlike good clothing, most people cannot “see” code, so very few people appraise its quality – A lot of people actually think they’re paying for code, that somehow more code is more valuable.

                                    Weird.

                                    I actually welcome legislation that puts programmers and business on the hook legally (with proper teeth, like the GDPR promises to have) for their work, because I would like to always do good work, but I know I can’t do that while being competitive.

                                    1. 3

                                      And worse, now that we have too many tools to ever fix any of them, there is actually an entire generation of “developers” who simply have no capacity to write quality, durable code.

                                      This isn’t any different from how it used to be. For as long as we’ve had computers we’ve had people worried about developers writing bad, brittle code. The usual solution? High quality, well tested components we know are good, so that developers have fewer places to screw up.

                                      Not having to roll our own crypto is, on the whole, a good thing.

                                      1. 1

                                        And worse, now that we have too many tools to ever fix any of them, there is actually an entire generation of “developers” who simply have no capacity to write quality, durable code.

                                        You sound old and grumpy; it’s gonna be alright. I’ve seen old people and the young generation alike write shitty (and good) code. At least by reusing existing components, people might have an easier time building systems or complex programs, relying on widely used and tested patterns.

                                        I actually welcome legislation that puts programmers and business on the hook legally (with proper teeth, like the GDPR promises to have) for their work

                                        How is such legislation going to encourage individuals to take risks and write their own components instead of reusing existing, more tested and widely used ones?

                                        because I would like to always do good work, but I know I can’t do that while being competitive.

                                        If you need legislation to be able to market your good work, “maybe it’s you”.

                                        1. 1

                                          That probably results in more money for insurance companies but not better software.

                                          1. 4

                                            I’m confident if we are planning more, writing better specs, coding more carefully, focusing on reducing code size, and doing more user-testing, then software will be better.

                                            And there may always be a gap: As we learn where it is, we can probably refine those fines…

                                        2. 3

                                          What if I don’t want to be a tailor, though? I want to be a welder, but I can’t, because I spend all my time tailoring!

                                      Component programming has, historically, been the hoped-for solution to the software crisis. Parnas made that a central advantage of his work on modules, high-correctness software is predicated on using verified components, etc etc. It might not have lived up to its standards, but it’s a lot better than where we used to be.

                                      Consider the problems you want to think about, and then consider how hard it would be to solve them if you had to write your own compiler.

                                          1. 2

                                        It might not have lived up to its standards, but it’s a lot better than where we used to be.

                                            Hmm. Can you elaborate on why it’s better? I feel that in a lot of ways it’s worse!

                                        Consider the problems you want to think about, and then consider how hard it would be to solve them if you had to write your own compiler.

                                            We’ve trained ourselves to make a base set of assumptions about what a computer is, and has to be. A C compiler is just a commodity tool, these days. But, obviously, people have invented their own languages, and their own compilers.

                                        But, consider a very basic computer, and Forth. Forth is simple enough that you can write very big functioning systems in a small amount of code. Consider the VPRI Steps project that’s been attempting to build an entire computing system in a fraction of the code modern systems take. What would things look like, then?

                                            1. 1

                                              Hmm. Can you elaborate on why it’s better? I feel that in a lot of ways it’s worse!

                                              The most popular Python time library, Arrow, is 2000+ lines of core code and another 2000+ lines of localization code. If you tried to roll your own timezone library you absolutely will make mistakes that will bite you down the line, but Arrow is battle-tested and, to everybody’s knowledge, correct.

                                              Consider the VPRI Steps project that’s been attempting to build an entire computing system in a fraction of the code modern systems take. What would things look like, then?

                                              That report lists 17 personnel and was funded by a 5 million dollar grant. I don’t have that kind of resources.

                                              1. 2

                                                When was the last time you wrote code that required accurate timezones (UTC is almost always OK for what I do)? And, to be honest, 4,000 lines doesn’t seem like enough to be exhaustive here…

                                                But, I don’t disagree that there are exceptional things that we should all share.

                                                Just that, in the current state of things, relying on an external library responsibly, requires a deep understanding of it to use it properly. You can’t rely on documentation—it’s incomplete. You can’t rely on its tests—they don’t exhaustively prove it works. You can’t trust the names of functions—they lie, or at least have ambiguity. And, more often than not, you care about only a small percentage of the functionality, anyway.

                                                That report lists 17 personnel and was funded by a 5 million dollar grant. I don’t have that kind of resources.

                                            The point wasn’t “we should all go define 2,000 line systems that do everything.” It was, apparently poorly, attempting to point out that there may have been another way to “compute,” that would have made rolling everything yourself more appropriate. I think it’d be pretty hard to go back to a place where that’s true—the market has spoken, and it’s OK with bloated, completely broken software that forces them to upgrade their computers every 3 years just to share photos in a web browser and send plain text email to their families.

                                                1. 1

                                                  When was the last time you wrote code that required accurate timezones (UTC is almost always OK for what I do)? And, to be honest, 4,000 lines doesn’t seem like enough to be exhaustive here…

                                                  Maybe not timezones, but definitely https, authentication libraries, web scrapers, crypto, unit testing frameworks, standard library stuff…

                                              I think it’d be pretty hard to go back to a place where that’s true—the market has spoken, and it’s OK with bloated, completely broken software that forces them to upgrade their computers every 3 years just to share photos in a web browser and send plain text email to their families.

                                                  Right, but I’m asking historically if this was caused by the rise of component-based programming, as opposed to just being correlated with it, or even if it happened despite it! It’s really hard to prove a counterfactual.

                                        3. 0

                                          So… do you not believe in evolution, then?

                                          1. 1

                                        Tbh, when I read “maybe it’s you”, I understand it as a stylistic device, and don’t read it literally. And I guess it depends on the situation; I totally agree with you that 99% of the “new” stuff invented for the web has no need to be created (which one could generalize to the whole economy if one wanted to). I just want to say that there are situations where being open to new ideas wouldn’t be bad, because sometimes bad ideas are kept just because of a network effect.

                                        And if we’re already talking about what exactly was written (I should have clarified this, so it’s my fault), I was talking about the title. I know the text says something different; that’s why I said “not the way it was intended”.

                                            1. 2

                                          Author here. Thank you for your feedback! You’re right: the title may be construed as accusatory. For the record: it is not. I’ll take better care with such things going forward!

                                        1. 1

                                          How do you check if a CPU has this feature?

                                          1. 1

                                            From the OP in that thread: “The quickest way to check whether or not you have PCID is to grep for “pcid” in /proc/cpuinfo” – Obviously, this is Linux only.

                                          1. 3

                                                Project-wise: more work on the TLA+ book and the UML history.

                                                Personal-wise, I’m learning AutoHotKey and it’s AMAZING. The syntax is janky and the commands I make are brittle (lots of measuring distances in a window or tweaking sleeps), but oh my god, the workflow improvements you get out of it. I’ve taken about a dozen annoying, fidgety GUI interactions I regularly do and turned them all into hotkeys. This has pretty much killed my desire to go back to a Mac.

                                            1. 1

                                              I’d love to read about what you’ve scripted and why. I’m always filing a rough edge or two and am curious to see what others improve.

                                              1. 2

                                                Some of the stuff I’ve added so far:

                                                • The calculator button on my keyboard now opens a J interpreter instead. If I press it while one’s active, it just switches instead of opening a new app.
                                                • I made a hotkey that adds/removes the current song on Spotify to my library, regardless of which window I’m in. It’s really nice for working while listening to a radio station.
                                                • Hotstrings for my address and cold shower template.
                                                • I got the cold shower link with my “note taking” extension I’m building in AHK. In this case, I had a different browser window tagged as the “browser”, and then pressed a hotkey to grab the current url from that browser window and paste it into the current textbox (here). Still far from complete, but it’s really promising!
                                              2. 1

                                                    Maybe not a stupid question, though it feels like one: Is there something preventing an AutoHotKey-like for Mac? macOS still has the AppleScript engine (as far as I’m aware). Seems as though it’d be possible to build a better, less janky language on that and tie it to “hot keys”? Disclaimer: I know of AutoHotKey, and have a rough idea of what it’s about, but I don’t have, nor have I used, a Windows machine in 10+ years to play around with it.

                                                1. 1

                                                  To my understanding it’s mostly a “nobody’s really tried” thing. I’ve heard Keyboard Maestro is pretty good though.

                                              1. 2

                                                A competent CPU engineer would fix this by making sure speculation doesn’t happen across protection domains. Maybe even a L1 I$ that is keyed by CPL.

                                                I feel like Linus of all people should be experienced enough to know that you shouldn’t be making assumptions about complex fields you’re not an expert in.

                                                1. 22

                                                          To be fair, Linus worked at a CPU company, Transmeta, from about ‘96 - ‘03 (??) and reportedly worked on, drumroll, the Crusoe’s code morphing software, which speculatively morphs code written for other CPUs, live, to the Crusoe instruction set.

                                                  1. 4

                                                    My original statement is pretty darn wrong then!

                                                    1. 13

                                                      You were just speculating. No harm in that.

                                                  2. 15

                                                    To be fair to him, he’s describing the reason AMD processors aren’t vulnerable to the same kernel attacks.

                                                    1. 1

                                                      I thought AMD were found to be vulnerable to the same attacks. Where did you read they weren’t?

                                                      1. 17

                                                        AMD processors have the same flaw (that speculative execution can lead to information leakage through cache timings) but the impact is way less severe because the cache is protection-level-aware. On AMD, you can use Spectre to read any memory in your own process, which is still bad for things like web browsers (now javascript can bust through its sandbox) but you can’t read from kernel memory, because of the mitigation that Linus is describing. On Intel processors, you can read from both your memory and the kernel’s memory using this attack.

                                                        1. 0

                                                                  Basically, both will need the patch that I presume will lead to the same slowdown.

                                                          1. 9

                                                            I don’t think AMD needs the separate address space for kernel patch (KAISER) which is responsible for the slowdown.

                                                    2. 12

                                                      Linus worked for a CPU manufacturer (Transmeta). He also writes an operating system that interfaces with multiple chips. He is pretty darn close to an expert in this complex field.

                                                      1. 3

                                                              I think this statement is correct. As I understand it, part of the problem in Meltdown is that a transient code path can load a page into cache before page access permissions are checked. See the Meltdown paper.

                                                        1. 3

                                                                The fact that he is correct doesn’t prove that a competent CPU engineer would agree. I mean, Linus is (to the best of my knowledge) not a CPU engineer, so he’s probably wrong when it comes to getting all the constraints of the field.

                                                          1. 4

                                                                  So? This problem is not quantum physics; it has to do with a well-known mechanism in CPU design that is understood by good kernel engineers - and it is a problem that AMD and Via both avoided with the same instruction set.

                                                            1. 3

                                                                    Not a CPU engineer, but see my direct response to the OP, which shows that Linus has direct experience with CPUs, from his tenure at Transmeta, a defunct CPU company.

                                                              1. 5

                                                                      from his tenure at Transmeta, a defunct CPU company.

                                                                      Exactly. A company whose innovative CPUs didn’t meet the market’s needs and were shelved on acquisition. What he learned at a company making unmarketable, lower-performance products might not tell him much about constraints Intel faces.

                                                                1. 11

                                                                  What he learned at a company making unmarketable, lower-performance products might not tell him much about constraints Intel faces.

                                                                        This is a bit of a logical stretch. Quite frankly, Intel took a gamble with speculative execution and lost. The first several years were full of errata for genuine bugs, and now we finally have a userland-exploitable issue with it. Often security and performance are at odds. Security engineers often examine / fuzz interfaces looking for things that cause state changes. While the instruction execution state was not committed, the cache state change was. I truly hope Intel engineers will now question all the state changes that happen due to speculative execution. This is Linus’ bluntly worded point.

                                                                  1. 3

                                                                    (At @apg too)

                                                                          My main comment shows consumers didn’t pay for more secure CPUs. So, that’s not really a market requirement, even if it might prevent costly mistakes later. Their goal was making things go faster over time with acceptable watts, despite poorly-written code from humans or compilers, while remaining backwards compatible with locked-in customers running worse, weirder code. So, that’s what they thought would maximize profit. That’s what they executed on.

                                                                    We can test if they made a mistake by getting a list of x86 vendors sorted by revenues and market share. (Looks.) Intel is still a mega corporation dominating in x86. They achieved their primary goal. A secondary goal is no liabilities dislodging them from that. These attacks will only be a failure for them if AMD gets a huge chunk of their market like they did beating them to proper 64-bit when Intel/HP made the Itanium mistake.

                                                                          Bad security is only a mistake for these companies when it severely disrupts their business objectives. In the past, bad security was a great idea. Right now, it mostly works, with the equation maybe shifting a bit in the future as breakers start focusing on hardware flaws. It’s sort of an unknown for these recent flaws. It all depends on mitigations and how many of those who replace CPUs will stop buying Intel.

                                                                  2. 3

                                                                          A company whose innovative CPUs didn’t meet the market’s needs and were shelved on acquisition.

                                                                    Tons of products over the years have failed based simply on timing. So, yeah, it didn’t meet the market demand then. I’m curious about what they could have done in the 10+ years after they called it quits.

                                                                    might not tell him much about constraints Intel faces.

                                                                    I haven’t seen confirmation of this, but there’s speculation that these bugs could affect CPUs as far back as Pentium II from the 90s….

                                                                2. 1

                                                                  The fact that he is correct doesn’t prove that a competent CPU engineer would agree.

                                                                  Can you expand on this? I’m having trouble making sense of it. Agree with what?

                                                            1. 3

                                                              Trackbacks are back!

                                                              Serious question: What is better about Webmentions?

                                                              1. 5

                                                                The protocol is slightly simpler, but otherwise *back, *mention, they all boil down to the same function.

                                                                Webmention is often paired with other good things, like microformats support, but of course that’s a separate thing.

                                                                1. 1

                                                                  Ah! Thanks! I’m very curious why there’s a brand new spec, with new name, that only slightly deviates in functionality… but this is (probably) a social issue that goes way beyond this spec in particular.

                                                                  1. 5

                                                                        It really is much simpler. Instead of messing around with XML-RPC, it’s just a single HTTP POST request. It does a really good job reusing parts of the web too, so it just feels more webby. I found it extremely easy to implement :-)
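
                                                                        The whole send step is one request along these lines (host names and paths are made up; the real endpoint is discovered from the target page):

                                                                            POST /webmention HTTP/1.1
                                                                            Host: example.org
                                                                            Content-Type: application/x-www-form-urlencoded

                                                                            source=https://example.com/my-reply&target=https://example.org/original-post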

                                                                    1. 4

                                                                      Oh, great! I’ll have to read it in more detail. It’s been a long time since I thought about XMLRPC, and completely forgot that Trackback relied on it.

                                                                2. 5

                                                                  Trackback is the oldest one. It did not say anything about verifying that the source actually links to the target, making spam extremely trivial.

                                                                  Pingback is the Wordpress XML-RPC one. Pretty much no one implemented useful presentation of pingbacks, just Wordpress’s useless default snippets. Eventually got overrun with Wordpress spam (with actual links).

                                                                      Webmention comes from a community of people who actually care about this stuff, which leads to useful presentation of different types of responses, anti-spam solutions, propagating responses-to-responses back to the source, integration with Twitter/Facebook/etc., and interaction with Mastodon/Hubzilla.

                                                                  And there’s Linked Data Notifications from people who believe that this is the Year of RDF on the Web :)

                                                                1. 1

                                                                        “[Property-based testing] is a high-investment, high-reward kind of deal.” I can’t think of a more Erlang-y notion than that (in a good way).

                                                                  1. 3

                                                                          I do stand by that comment. In the (current) preface I mention that I’ve spent years toying with property-based testing [on and off] without feeling competent. I’ve seen talks about it, asked questions of people who knew more than I do, and a lot of the examples – toy examples – always felt very limited, or highlighted some basic bug (0.0 =/= 0 but Erlang allows both!). Stateless code is fairly okay to test, but stateful code was more of a challenge.

                                                                    Eventually I decided to take a deep dive and test non-trivial projects with it [albeit personal toy projects] until I could find ways to deal with asynchronous code, complex outputs and inputs, and figure out, with limited outside help, what seemed to stick or not.

                                                                    I’m hoping that this ‘book’ helps share my experience and therefore makes the investment required lower, but truth is that there is a kind of habit that you get in “thinking in properties” similar to what you get in programming languages or specific paradigms, and that takes practice. There are tips and tricks, but you only think comfortably within a paradigm once you’ve forced yourself to hit and go through a few walls with it.

                                                                    1. 1

                                                                      but truth is that there is a kind of habit that you get in “thinking in properties” similar to what you get in programming …

                                                                      I find that the things that make good contracts also happen to overlap with good properties to test. That doesn’t solve the “properties are hard”, but does reframe the problem in a way that’s often, in my (rather limited) experience, easier to reason about.

                                                                  1. 0

                                                                          I’m still just shocked that they find so much value in Django that it is worth spending large amounts of engineering resources hacking up CPython instead of migrating to another language altogether.

                                                                    1. 7

                                                                      They are not only finding value, they are also sensible about risk management and making Python better.

                                                                      No hipsterism, only doing a good job, is best.

                                                                      1. 0

                                                                        Hmm… are the changes they are making to Python that beneficial to the masses?

                                                                        I don’t think it’s “hipsterism” to reevaluate whether or not you’re still getting all the benefits you once were from a language/framework once in a while—that’s sensible, to me. Maybe they are doing that, though. They just start every post like this with vanity “we are the biggest django app ever,” which makes it harder to believe.

                                                                        And, there are certainly sensible ways to derisk rewriting pieces of a system, and they likely are already using some of them to test their experimental changes to the foundational stuff they mess with (CPython).

                                                                        1. 1

                                                                          Maybe it’s not vanity but perspective into the risk they’d take in migrating. Going for whatever the others are using is bound to be harder to test than a CPython fork, or what do you think?

                                                                          What does it matter if the changes are beneficial to the masses or not? Is the value of a contribution measured in customer reach?

                                                                          1. 1

                                                                            Going for whatever the others are using is bound to be harder to test

                                                                            See, this is the problem. I’m not suggesting they follow anyone, and I am not sure why you’re assuming that just because a rewrite happens it’s because someone is chasing a new fad. There are many different paths for them to evaluate, and some even have a pretty interesting story.

                                                                                Suppose they adopted Jython (though, I have no idea what CPython version Jython tracks anymore) for a while; they could start to convert poorer-performing code to Java and utilize the interop.

                                                                            They could also adopt microservices and use Go, or Clojure, or Scala, or C++14 Services that speak Thrift.

                                                                            The point I am trying to make here is that they are basically operating a fork of CPython (the masses comment) and though some stuff, like this, might get merged back, forks inevitably require more maintenance, and resources to deal with. Patches need to be applied, tested, released, etc, etc, etc.

                                                                      2. 4

                                                                        I think the answer is that it is not large amounts of engineering resources. If you look at CPython pull request in the article, it is a trivial amount of code (less than 200 lines).

                                                                        1. 2

                                                                          Never underestimate how long it takes to write, test, rewrite, test, etc. and submit a 200 line PR. This may have taken months, and the thanks section mentions contributions and discussions from multiple people. I don’t think it took a year, as the post says it took a while to lose the efficiencies the previous GC post (January ‘17) created, but a couple of months doesn’t seem unreasonable for something like this.

                                                                          1. -1

                                                                            And you think rewriting their whole codebase in a different language is going to take less than a couple months? ;-)

                                                                            1. 0

                                                                              Did I say that?

                                                                        2. 2

                                                                          Same reason COBOL supports Unicode. You can probably hack up CPython with a couple of engineers while the rest of the engineering team Moves Fast,* while migrating to another language is a serious undertaking that deeply affects all of engineering. And even if you could magically build the new system and migrate everything overnight, you’re still left with all these senior Python programmers who have to learn a completely new language. In the long term it might be a better option, but in the short term it’s a terrible idea.

                                                                          *And for all we know, this could be a stopgap while another team preps for a language migration.

                                                                        1. 1

                                                                            I was disappointed, and confused, that this wasn’t an open Game Genie clone, but Age of Empires was fun, too.

                                                                          1. 8

                                                                            I can’t understand why tech interviews involve implementing some sort of tree, regular expression engine, sorting algorithm or parser

                                                                            I don’t have the history to back it up, but I strongly suspect tree and linked-list questions are a holdover from a time where almost everything was done in C, and you had to know how to hand-roll data structures to get anything done. If that’s the case, then it would have been a “do you know how to use the language” question, not a “can you come up with new algorithms on the spot” question.

                                                                            1. 10

                                                                                Specifically, programming in C requires understanding pointers, and the difference between an object and a pointer to an object, and a pointer to a pointer. These distinctions are essential to solving any problem in C. Basic data structures are a simple, self-contained problem that involves pointers. The linked list question is not about linked lists. It’s about understanding why your append function takes a node star star.
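
                                                                                A sketch of the classic shape of the answer (names made up, error handling elided):

                                                                                    #include <stdlib.h>

                                                                                    struct node { int value; struct node *next; };

                                                                                    /* Taking node ** is the point: append must be able to write
                                                                                       through the caller's head pointer when the list is empty. */
                                                                                    void append(struct node **head, int value) {
                                                                                        while (*head != NULL)
                                                                                            head = &(*head)->next;
                                                                                        *head = malloc(sizeof(**head));
                                                                                        (*head)->value = value;
                                                                                        (*head)->next = NULL;
                                                                                    }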

                                                                              1. 2

                                                                                  Basic data structures are a simple, self-contained problem

                                                                                (I agree with your whole assessment, but find this part particularly compelling.)

                                                                                If you interview for a position that involves programming, you are ultimately going to be forced to solve problems—sometimes brand new problems that have not been solved before. So how does one assess a person’s ability to do that? You can’t give someone a problem that’s never been solved before…

                                                                                  I don’t know. The thing that I find most awful about data structures questions is that knowledge of them is so unevenly distributed: for a given candidate it’s either impossible (derive it from scratch) or memorized/almost rehearsed because the candidate just knows it.

                                                                                The best questions I’ve had have been generally open ended. There might be a data structure that solves it well that the interviewer has in mind, but there are also 10 other ways to solve the problem, with different levels of sophistication, that would probably be good, in practice, at least until hockey stick growth hits. The best interviewers are open minded, and good enough on their feet to adapt their understanding of the problem to a successful understanding of a potential solution that the candidate is crafting.

                                                                                  Maybe the fact that algorithms and data structures questions have but one answer is the actual drawback… hmmm.

                                                                                1. 3

                                                                                  WRT the normal distribution, I think modern interviewing has forgotten that such questions aren’t supposed to be textbook tests and have drifted away from reasonable questions. Even if you’ve never heard of a binary tree, I can explain the concept in 30 seconds such that you should be able to implement search and insert. (Rebalance may be harder.) I can’t argue it won’t be easier if you’ve done it before, but it should never be impossible.
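
                                                                                    For reference, the expected answer is only about this much code (a sketch; allocation errors ignored, duplicates dropped):

                                                                                        #include <stdlib.h>

                                                                                        struct tree { int key; struct tree *left, *right; };

                                                                                        struct tree *search(struct tree *t, int key) {
                                                                                            if (t == NULL || t->key == key) return t;
                                                                                            return search(key < t->key ? t->left : t->right, key);
                                                                                        }

                                                                                        void insert(struct tree **t, int key) {
                                                                                            if (*t == NULL) {
                                                                                                *t = calloc(1, sizeof(**t));   /* children start NULL */
                                                                                                (*t)->key = key;
                                                                                            } else if (key != (*t)->key) {
                                                                                                insert(key < (*t)->key ? &(*t)->left : &(*t)->right, key);
                                                                                            }
                                                                                        }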

                                                                                  1. 4

                                                                                    That’s probably true. But the concept isn’t useful without intuition about how to apply it, and I think that’s part of the problem, too. Often, criticism of these questions is that “libraries already have a binary tree, and I’ll just use that.” I think it’s likely that these types of interview questions poorly proxy the question of: “does this person know when to use a binary tree? They must if they know how to implement one!”

                                                                                2. 1

                                                                                  They are asked such questions to differentiate between a flood of new programmers, each without experience. Such aptitude-type questions can be automated as a way to cull the majority of applicants.

                                                                                  Sad that they are being applied across the board regardless of experience, even though nearly all experienced programmers have never had to actually implement a CS-textbook algorithm, preferring instead to use the tried, tested, and optimized implementation in a language’s base class library, or those readily available elsewhere.

                                                                                  1. 1

                                                                                    But trivia about algorithms and data structures says practically nothing about experience.

                                                                                    EDIT: Nevermind, I just read your post again.

                                                                                    Still, I feel like focusing on more realistic problems could be a much better predictor of aptitude.

                                                                              1. 3

                                                                                This is cool, and I am going to sound like a jerk, but the big and really important part of the curl project is libcurl, not the curl command, and this doesn’t address that. Given the choice of language, it doesn’t seem like you’re going after that, which is a bit disappointing. Cool project, though! I’ll put it on my list of clients to mess around with.

                                                                                1. 1

                                                                                  You’re totally right; you don’t sound like a jerk at all. I keep thinking about how to address this massive shortfall.

                                                                                  I’m open to ideas and contributions too.

                                                                                  Thank you, sincerely, for the comment.

                                                                                  :-)

                                                                                1. 7

                                                                                  Cloud icons for weather only have three lumps; even the iPhone weather app’s does. But without context, it might look like a bowler hat or something.

                                                                                  1. 1

                                                                                    A bowler hat? I’ve never been to a bowling alley where anyone wore a hat.

                                                                                    But yeah, I could see how it might be an icon for a designer hat store app.

                                                                                  1. 4

                                                                                    First, to call itself a process could [simply] execute /proc/self/exe, which is an in-memory representation of the process.

                                                                                    There’s no such representation available as a file. /proc/self/exe is just a symlink to the executable that was used to create the process.

                                                                                    Because of that, it’s OK to overwrite the command’s arguments, including os.Args[0]. No harm will be made, as the executable is not read from the disk.

                                                                                    You can always call a process with whatever args[0] you like. No harm would be done.
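                                                                                    To make the argv[0] point concrete, here’s a minimal Linux-only sketch in Go (the fabricated name is, of course, made up):

                                                                                    ```go
                                                                                    // Minimal Linux-only sketch: argv[0] is just a convention, so we
                                                                                    // can re-exec /proc/self/exe under any name we like.
                                                                                    package main

                                                                                    import (
                                                                                        "fmt"
                                                                                        "os"
                                                                                        "os/exec"
                                                                                    )

                                                                                    func main() {
                                                                                        fmt.Println("argv[0] is:", os.Args[0])

                                                                                        // The re-exec'd child sees the fabricated name and stops
                                                                                        // here, guarding against infinite recursion.
                                                                                        if os.Args[0] == "totally-made-up-name" {
                                                                                            return
                                                                                        }

                                                                                        cmd := exec.Command("/proc/self/exe")
                                                                                        cmd.Args = []string{"totally-made-up-name"} // overwrite argv[0]
                                                                                        cmd.Stdout = os.Stdout
                                                                                        cmd.Stderr = os.Stderr
                                                                                        if err := cmd.Run(); err != nil {
                                                                                            fmt.Fprintln(os.Stderr, "re-exec failed:", err)
                                                                                        }
                                                                                    }
                                                                                    ```

                                                                                    The parent prints its real name, the child prints the fabricated one, and no harm is done.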

                                                                                    1. 4

                                                                                      Although /proc/self/exe looks like a symbolic link, it behaves differently if you open it. It’s actually more like a hard link to the original file. You can rename or delete the original file, and still open it via /proc/self/exe.
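                                                                                      A quick way to see this for yourself (a throwaway Linux-only sketch in Go): delete your own binary, then open /proc/self/exe anyway.

                                                                                      ```go
                                                                                      // Throwaway experiment: unlink our own binary, then show that
                                                                                      // /proc/self/exe can still be opened and read (Linux only).
                                                                                      package main

                                                                                      import (
                                                                                          "fmt"
                                                                                          "io"
                                                                                          "os"
                                                                                      )

                                                                                      func main() {
                                                                                          self, err := os.Executable() // path we were launched from
                                                                                          if err != nil {
                                                                                              panic(err)
                                                                                          }
                                                                                          if err := os.Remove(self); err != nil {
                                                                                              fmt.Fprintln(os.Stderr, "remove:", err)
                                                                                              return
                                                                                          }

                                                                                          f, err := os.Open("/proc/self/exe") // still works after the unlink
                                                                                          if err != nil {
                                                                                              fmt.Fprintln(os.Stderr, "open:", err)
                                                                                              return
                                                                                          }
                                                                                          defer f.Close()

                                                                                          n, _ := io.Copy(io.Discard, f)
                                                                                          fmt.Printf("binary deleted, yet read %d bytes via /proc/self/exe\n", n)
                                                                                      }
                                                                                      ```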

                                                                                      1. -4

                                                                                        No harm will be made, as the executable is not read from the disk.

                                                                                        the executable is definitely read from the disk

                                                                                        Again, this was only possible because we are executing /proc/self/exe instead of loading the executable from disk again.

                                                                                        no

                                                                                        The kernel already has open file descriptors for all running processes, so the child process will be based on the in-memory representation of the parent.

                                                                                        no that’s not how it works, and file descriptors aren’t magic objects that cache all the data in memory

                                                                                        The executable could even be removed from the disk and the child would still be executed.

                                                                                        that’s because it won’t actually be removed if it’s still used, not because there’s a copy in memory

                                                                                        <3 systems engineering blog posts written by people who didn’t take unix101

                                                                                        1. 12

                                                                                          Instead of telling people they are idiots, please use this opportunity to correct the mistakes that others made. It’ll make you feel good, and it won’t make the others feel bad. Let’s prop everyone up, and not just sit there flexing muscles.

                                                                                          1. 3

                                                                                            Sorry for disappointing you :)

                                                                                            I got that (wrongly) from a code comment in Moby (please check my comment above) and didn’t check the facts.

                                                                                            1. 2

                                                                                              I’m not saying that the OP was correct, I’m just saying that:

                                                                                              /proc/self/exe is just a symlink to the executable

                                                                                              … is also not completely correct.

                                                                                          2. 3

                                                                                            Thanks for pointing out my mistakes! I just fixed the text.

                                                                                            I made some bad assumptions when I read this comment [1] from Docker and failed to validate it. Sorry.

                                                                                            By the way, is it just my bad English, or is that comment actually wrong as well?

                                                                                            [1] https://github.com/moby/moby/blob/48c3df015d3118b92032b7bdbf105b5e7617720d/pkg/reexec/command_linux.go#L18

                                                                                            1. 1

                                                                                              is that comment actually wrong as well?

                                                                                              I don’t think it’s strictly correct, but for the purpose of the code in question it is accurate. That is, /proc/self/exe points to the executable file that was used to launch “this” process - even if it has moved or been deleted - and this most likely matches the “in memory” image of the program executable; but I don’t believe that’s guaranteed.

                                                                                              If you want to test and make sure, try a program which opens its own executable for writing and trashes the contents, and then execute /proc/self/exe. I’m pretty sure you’ll find it crashes.

                                                                                              1. 3

                                                                                                but I don’t believe that’s guaranteed.

                                                                                                I think it’s guaranteed on local file systems as a consequence of other behavior. I don’t think you can open a file for writing when it’s executing – you should get ETXTBSY when you try to do that. That means that as long as you’re pointing at the original binary, nobody has modified it.

                                                                                                I don’t think that holds on NFS, though.
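                                                                                                A quick way to check this on a local filesystem (a sketch in Go, Linux-only; that Go surfaces the errno through errors.Is is my assumption about the error wrapping):

                                                                                                ```go
                                                                                                // Sketch: a running program tries to open its own executable
                                                                                                // for writing. On a local filesystem this should fail with
                                                                                                // ETXTBSY ("text file busy").
                                                                                                package main

                                                                                                import (
                                                                                                    "errors"
                                                                                                    "fmt"
                                                                                                    "os"
                                                                                                    "syscall"
                                                                                                )

                                                                                                func main() {
                                                                                                    self, err := os.Executable()
                                                                                                    if err != nil {
                                                                                                        panic(err)
                                                                                                    }

                                                                                                    _, err = os.OpenFile(self, os.O_WRONLY, 0)
                                                                                                    if errors.Is(err, syscall.ETXTBSY) {
                                                                                                        fmt.Println("got ETXTBSY, as expected:", err)
                                                                                                    } else {
                                                                                                        fmt.Println("unexpected result:", err)
                                                                                                    }
                                                                                                }
                                                                                                ```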

                                                                                                1. 1

                                                                                                  If you want to test and make sure, try a program which opens its own executable for writing and trashes the contents, and then execute /proc/self/exe. I’m pretty sure you’ll find it crashes

                                                                                                  Actually, scratch that. You won’t be able to write to the executable, since you’ll get ETXTBSY when you try to open it. So, for pretty much all intents and purposes, the comment is correct.

                                                                                                  1. 1

                                                                                                    Interesting. Thank you for your insights.

                                                                                                    In order to satisfy my curiosity, I created this small program [1] that re-executes /proc/self/exe in an infinite loop and prints the result of readlink each time.

                                                                                                    When I run the program and then delete its binary (i.e., the binary that /proc/self/exe points to), the program keeps successfully re-executing itself. The only difference is that /proc/self/exe now points to /my/path/proc (deleted).

                                                                                                    [1] https://gist.github.com/bertinatto/5769867b5e838a773b38e57d2fd5ce13
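                                                                                                    The core of it looks roughly like this (a sketch along those lines; not necessarily identical to the linked gist):

                                                                                                    ```go
                                                                                                    // Print where /proc/self/exe points, pause, then replace this
                                                                                                    // process image by exec'ing it again, forever (Ctrl-C to stop).
                                                                                                    // Delete the binary from another shell and readlink starts
                                                                                                    // reporting "... (deleted)", yet the exec keeps succeeding.
                                                                                                    package main

                                                                                                    import (
                                                                                                        "fmt"
                                                                                                        "os"
                                                                                                        "syscall"
                                                                                                        "time"
                                                                                                    )

                                                                                                    func main() {
                                                                                                        target, err := os.Readlink("/proc/self/exe")
                                                                                                        fmt.Printf("/proc/self/exe -> %q (err: %v)\n", target, err)

                                                                                                        time.Sleep(time.Second) // window to delete the binary externally

                                                                                                        // Exec replaces the current process image; on success it never returns.
                                                                                                        err = syscall.Exec("/proc/self/exe", os.Args, os.Environ())
                                                                                                        fmt.Fprintln(os.Stderr, "exec failed:", err)
                                                                                                    }
                                                                                                    ```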

                                                                                              1. 10

                                                                                                Cool that you went and did this! I built @technomancy’s atreus a while back, but don’t actually use it. I should, though…

                                                                                                1. 3

                                                                                                  Thanks - that’s a very cool looking keyboard!

                                                                                                  1. 1

                                                                                                    Why don’t you use it?

                                                                                                    1. 4

                                                                                                      The reason I don’t use it is simply that I don’t want to become dependent on it. @technomancy travels everywhere with his, and sets it up on top of his laptop keyboard. I could try that, I suppose, but it seems like a habit that’d be very hard to get into. Above all, I don’t have pain from regular laptop keyboards, so the increased ergonomics haven’t pushed me into it by necessity.

                                                                                                      But, now that I’m saying this, I really should give it more of a chance, and try it again… There’s no reason not to, for sure.

                                                                                                      1. 3

                                                                                                        I don’t think learning a new keyboard will prevent you from using your laptop keyboard.

                                                                                                        I switch freely between a Maltron 3D and a ThinkPad keyboard. The biggest challenge is learning the new keyboard in the first place (about two months for the Maltron).

                                                                                                        1. 1

                                                                                                          You’re right, it doesn’t stop me from using a different keyboard. I spend enough time away from my desk, though, that I feel I’d have to bring it with me everywhere to ever get comfortable with it.

                                                                                                  1. 3

                                                                                                    Since NetBSD gets less credit, I’ll add that the things that make it portable also got it a lot of use in embedded systems and CompSci; basically, places where customization happens at the kernel-code level. The folks crazy enough to try to rewrite a UNIX in a new language also always started with NetBSD, since it’s the easiest to rewrite. None are easy, mind you, but they didn’t think they’d have a chance with FreeBSD or OpenBSD, for different reasons.

                                                                                                    1. 1

                                                                                                      How portable is NetBSD, compared to the other BSDs?

                                                                                                      1. 3

                                                                                                        I have no idea haha. I’ve just read up on a lot of work by people building on various OSes. In UNIX land, those building on NetBSD seemed to have an easier time. Here are three things many said:

                                                                                                        1. It wasn’t huge like FreeBSD or Linux.

                                                                                                        2. Its focus on portability made it easier to change.

                                                                                                        3. Its community was welcoming or helpful to those trying to rewrite it for various reasons.

                                                                                                        I’m not sure how true these are in general or currently since I don’t code BSD/Linux kernels or userlands. I just kept seeing a lot of people doing BSD mods say those things about NetBSD specifically. Probably better for developers of various BSD’s to chime in at this point since we have at least three represented in Lobsters community.

                                                                                                        1. 4

                                                                                                          The NetBSD rump kernel concept is one real-world demonstration that the abstraction layers are at least fairly good. It’s not clear you couldn’t do something similar with another OS, but NetBSD seems to have managed it with quite little actual porting needed (drivers run completely unmodified).

                                                                                                        2. 2

                                                                                                          They used to joke that it even runs on your toaster. Obviously, most toasters don’t have an embedded OS, but I think the joke implies something about how portable they want it to be.

                                                                                                          1. 5
                                                                                                            1. 2

                                                                                                              Ha! I hadn’t seen this, but I’m certainly not surprised!

                                                                                                            2. 3

                                                                                                              Obviously, most toasters don’t have an embedded OS

                                                                                                              Yet.

                                                                                                              The obvious use case for such a device is cryptocurrency mining.

                                                                                                              1. 5

                                                                                                                The obvious use case for such a device is cryptocurrency mining.

                                                                                                                Yep, this should generate enough heat to burn a few slices of toast :+)

                                                                                                        1. 17

                                                                                                          Easy fix: don’t complain about existing emails; simply convert the signup into a password-reset email. And if usernames aren’t public, don’t complain about one (or even ask for one) until a confirmation mail has been received.

                                                                                                          Username or Email disclosure can be very bad for privacy.
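                                                                                                          A sketch of what I mean (in Go; all handler and helper names here are hypothetical stand-ins, not a real library): the signup endpoint branches internally but responds identically either way.

                                                                                                          ```go
                                                                                                          // Hypothetical sketch: existing accounts get a password-reset-style
                                                                                                          // mail, new ones get a confirmation mail, and the HTTP response
                                                                                                          // never reveals which branch ran.
                                                                                                          package main

                                                                                                          import (
                                                                                                              "fmt"
                                                                                                              "log"
                                                                                                              "net/http"
                                                                                                          )

                                                                                                          // Toy stand-ins for a real user store and mailer.
                                                                                                          var registered = map[string]bool{"existing@example.com": true}

                                                                                                          func sendConfirmation(email string) { log.Printf("mail %s: confirm signup", email) }
                                                                                                          func sendPasswordHint(email string) { log.Printf("mail %s: account exists; reset password?", email) }

                                                                                                          func signupHandler(w http.ResponseWriter, r *http.Request) {
                                                                                                              email := r.FormValue("email")
                                                                                                              if registered[email] {
                                                                                                                  sendPasswordHint(email)
                                                                                                              } else {
                                                                                                                  sendConfirmation(email)
                                                                                                              }
                                                                                                              // Identical response either way; only the mailbox owner learns more.
                                                                                                              fmt.Fprintln(w, "Check your email to continue signing up.")
                                                                                                          }

                                                                                                          func main() {
                                                                                                              http.HandleFunc("/signup", signupHandler)
                                                                                                              log.Fatal(http.ListenAndServe(":8080", nil))
                                                                                                          }
                                                                                                          ```

                                                                                                          One caveat: the two branches should also take comparable time (e.g., by queuing mail asynchronously), or response timing can still leak which branch ran.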

                                                                                                          1. 1

                                                                                                            This just delays disclosure. Eventually, the signup is gonna fail because the username/email exists, even if you push that failure until after an email.

                                                                                                            1. 8

                                                                                                              Not necessarily. The signup could only continue with access to the target’s email account. The attacker doesn’t know whether the attack succeeded unless they have access to that mailbox, and if they do, the entire point is moot anyway, since they can just click “reset password”.

                                                                                                              1. 2

                                                                                                                Let’s say I’m an attacker. I sign up on lobste.rs as hacker@gmail.com; the signup works and I get a confirmation email. I then sign up on lobste.rs with the email tscs37@gmail.com. Because your account already exists, I don’t get an email. The difference, even if it’s something not happening, tells me what I wanted to know.

                                                                                                                1. 12

                                                                                                                  Yeah, but you don’t know whether an email went out for tscs37@gmail.com, since you don’t have access to that account.

                                                                                                                  You wouldn’t see an email whether the account existed or not.

                                                                                                                  1. 0

                                                                                                                    I do know, though. I know I didn’t get an email for tscs37@gmail.com. That tells me just as much about the account’s existence as if I had got the email.

                                                                                                                    1. 14

                                                                                                                      If you get that mail, that implies you have control over tscs37@gmail.com, in which case the entire method doesn’t do anything anyway. But it’s also not valid to say the entire method is therefore flawed.

                                                                                                                      You’ve just gained access to my mail account: you can click “reset password”. Or sign up for real. Or just check the emails in the archive folder.

                                                                                                                      If you do not have access to tscs37@gmail.com, then you cannot tell whether that address has signed up, because you don’t get the email.

                                                                                                                      1. 2

                                                                                                                        Ah, I see what you’re saying now. Yeah, that might work. It wouldn’t work for sites that have usernames, though; users could only be identified and log in with email addresses.

                                                                                                                        1. 2

                                                                                                                          It can work with sites that have usernames, but only for display purposes.

                                                                                                                          So you can still be shown as “travisjeffery” or “ilikedeepthreads” or whatever, rather than showing your email all over the place, but that’s all it is - display text.

                                                                                                                        2. 1

                                                                                                                          This also validates that an email address is active during signup, instead of after signup, which is a huge win in and of itself.

                                                                                                                        3. 1

                                                                                                                          It tells you that somebody already registered that email address. It does not tell you if that somebody also has a lobsters account.

                                                                                                                      2. 2

                                                                                                                        You didn’t get any email because you don’t control the email account to be able to check it.

                                                                                                                        1. 1

                                                                                                                          Lobsters isn’t a good example of the average case, because it has a public list of usernames. So you can tell if the name is there. You might not know if it’s the target individual, though; clicking on the profile may or may not tell you more. You can also social-engineer users to find out identity information you might not get from a commercial service without more work and risk.

                                                                                                                          I do agree with you that it’s pointless to hide the username on a site where (a) you can see if it’s taken and, critically, (b) the username is tied to something like an email address that uniquely identifies an individual. Other examples might be their Facebook or Twitter accounts. I’d still default to hiding it in a likely-reusable implementation of login, since I can’t know ahead of time whether a site will leak information like that. Secure by default.

                                                                                                                          1. 1

                                                                                                                            If the email address were used for login (and usernames only for display), there’d be minimal attack surface there.

                                                                                                                    2. 1

                                                                                                                      This is definitely a good way to prevent those issues. I’ve thought several times about using this method on some projects.