1.  

    This looks like they are describing Nim.

    1.  

      I see the ask as Linear ML, which is largely what I think of Rust being to begin with… :)

      1.  

        Araq’s write-up makes me think they’re going in a different direction than Rust’s foundation. I don’t track it closely, though.

      1. 3

        Every external dependency is a liability. If your cloud provider messed up and it impacted your customers, it’s still your responsibility. It’s a risk you took when deciding to rely on them.

        1. 6

          They can still provide individualized data so you can reevaluate this risk over time.

        1. 0

          I agree and disagree at the same time. I mean, yes, my provider’s uptime isn’t the same as my uptime, because how could it be? But at the same time you only need to monitor whether the services you are providing are “up and running”; you cannot check and monitor all of your customers’ services (especially when you run code written by them). You need to set clear boundaries between “you” and “provider”, and you can only be mad if the “provider” didn’t deliver what they said they would, but you cannot require the “provider” to manage your problems just because you are their client.

          1. 7

            I am not sure the argument is “provider needs to monitor my app.” The argument is that “provider needs to give me better insight into the services my app relies on so I can tell if my app is fucked, or provider isn’t providing service.”

            These are distinct.

            1. 5

              In the mainframe and NUMA markets, the big machines usually had so-called RAS features that did things like monitor for reliability. Then, they could fix problems or even move apps somewhere else ahead of the big problem by detecting a pre-condition. The vendors of actual five-9’s systems like NonStop and Stratus also provide tooling like that.

              Then, the commodity clouds show up saying they have five 9’s except you won’t get those 9’s using the service. Seems misleading. Plus, they can probably spot failures and alert customers if their forerunners from decades before could do it. They’re just not doing it. Another reason to consider one of the older, HA-for-real solutions if one is doing something needing high 9’s.

            1. 2

              I am playing with expression parsers and unit conversions in elisp, as an exercise in building a little Soulver toy. It’s been a good reason to refresh my memory on RPN and the Shunting Yard algorithm, and a few other things…

              1. 1

                Couldn’t help but notice how Brad Fitzpatrick’s commits to memcached and go have complementary hours. This guy codes all day(?)

                1. 1

                  Does he still work on memcached? I assumed that was historical…

                  Edit: indeed

                  1. 2

                    I don’t believe he actually codes all day long; there was definitely a behavioral change from one project to the other. For instance, this could also be analyzed over a range of dates, which may also be interesting.

                    Fear metadata.

                1. 6

                  In the initial moments of the outage there was speculation it was an attack of some type we’d never seen before.

                  I am as guilty as anyone, but it’s really interesting that we create these far-fetched explanations when something goes wrong instead of assuming “the last deploy broke something,” which statistically is far more likely.

                  1. 7

                    Well, I don’t know. Cloudflare sees so many attacks of considerable scale every day that it’s probably more common for their monitoring to go crazy because of attacks than because of someone pushing code.

                    Although, I do agree that not considering this option right away means that they’re biased in some way towards blaming external actors.

                    1. 4

                      This is a great point! Their statistics for incident causes are likely not the same as most companies, but I’d still expect to see “bad deploys”/human error heavily represented.

                      I guess the other important point is that they are probably used to bad deploys being caught earlier with gradual rollouts. The size of the impact for this incident is atypical for a bad deploy…

                      1. 10

                        I’m an SRE at Cloudflare. Most deploys to our edge are handled by us so we generally know what’s been deployed and when. As the post mentions we use the Quicksilver key-value store to replicate configs near-instantly, globally, and these config changes are either user data or changes made by us as part of a release. The WAF is unique here in that it’s effectively a code deploy performed through the key-value store, and is performed by the WAF engineers directly, via CI/CD, not us.

                        So yeah, when this outage started we weren’t immediately aware that a WAF release had recently gone out, but generally we wouldn’t want to know - they do it frequently, so it’d mostly be noise to us if we had notifications for it. This is one of the things that led to a few minutes’ delay in identifying the WAF as the cause of the issues, but we had excellent people on hand who used tools like flamegraph generators to identify the WAF in only a few minutes.

                        It’ll be interesting to see how we change the deployments.

                  1. 2

                    I wonder how fast it is compared to V8. I don’t know of any published numbers for how fast V8 runs the ECMAScript Test Suite, which was the main metric provided in this post.

                    1. 3

                      I assume V8 is much faster since it JITs.

                      1. 4

                        JITs help most with repetitive / tightly looped code. I don’t think that’s the common case for JS. Certainly it’s an important case for some types of applications, e.g. I’m sure Google Sheets couldn’t handle large spreadsheets without a JIT. But I’m willing to bet the majority of websites see no measurable benefit from V8’s JIT. So I’m much more interested in comparing speed evaluating the ECMAScript Test Suite than, say, rendering the Mandelbrot set.

                        1. 2

                          These days V8 has an interpreter to aid fast startup and to avoid doing unnecessary work for code that’s only run once or twice. Given the effort the various JS engines have made over the past 15 years or so in improving performance of real world JS I generally trust they’re doing what they can.

                          1. 2

                            Right, I’m not saying I think QuickJS might beat V8. I’m just wondering how close it comes. 10% of V8 would not impress me, but 80% (for JIT-unfriendly workloads) would be a significant achievement.

                            1. 2

                              Folks reported it is closer to 3%.

                              1. 2

                                Wait, as in 3ms in v8 takes 100ms in QuickJS (eg. 97% slower)? Or, what takes 97ms in V8 takes 100ms (eg, 3% slower)?

                                My guess, given Peter’s framing is the former…

                                1. 4

                                  300µs startup and teardown time is pretty quick though. On my MacBook Pro nodejs takes 40ms wall time to launch and stop.

                                  node <<< ''  0.04s user 0.01s system 91% cpu 0.058 total

                                  So for quick scripts where the wall time would be dominated by those 40ms, QuickJS would win. That immediately makes me think of cloud serverless scripts (Google Cloud Functions, AWS Lambda, Azure Functions).

                                  I’m also curious about @st3fan’s 3% figure: which people, and where? But it seems plausible to me.

                                  1. 2

                                    It’s not a fair comparison though. Node is a framework, it’s not a JS engine. Try comparing with d8, which is the v8 shell.

                                    For instance:

                                    TIMEFORMAT='%3R'; time (./qjs -e '')
                                    0.007
                                    TIMEFORMAT='%3R'; time (v8-7.6.303 -e '')
                                    0.031
                                    TIMEFORMAT='%3R'; time (node <<< '')
                                    0.069
                                    

                                    Still a big difference between v8 and quickjs obviously, but now we’re not looking at how long node takes to load the many javascript files that it reads by default (for instance to enable require). :)

                    1. -2

                      Sorry, but “a simple prompt” this is not.

                      export PS1='\u@\h \w \$ ' is simple. Anytime you get PROMPT_COMMAND involved, it’s no longer simple.

                      1. 13

                        I don’t think pedantry over who thinks what is simple or not is really that productive or useful, especially when someone’s showing off something they just made and are proud of. Simple is a subjective term, and subjectively I think this comment sucks.

                        1. -1

                          I don’t think pedantry over who thinks what is simple or not is really that productive or useful, especially when someone’s showing off something they just made and are proud of.

                          This attitude is exactly what I think is wrong in this industry. We celebrate complexity, and then complain when complex things are hard to use, insecure, or broken.

                          Simple is a subjective term, and subjectively I think this comment sucks.

                          A simple downvote in disagreement would suffice.

                          1. 5

                            Please do not use downvotes to disagree. There intentionally is not a “Disagree” reason.

                            1. 0

                              Presumably I am being downvoted, but too early to see why. I assume “incorrect” or “troll.” In the case of “incorrect,” where correctness is subjective, I assume people mean that as disagree. :shrug:

                            2. -1

                              This attitude is exactly what I think is wrong in this industry.

                              Yes, some random person making a fat executable to reduce dynamic calls on their terminal prompt is what’s wrong with the industry. Carry on, white knight of software!

                              A simple downvote in disagreement would suffice.

                              Obviously not.

                              1. -1

                                Carry on, white knight of software!

                                Oh, brother. You miss my point and suggest I have a problem with the author and then start name calling. Good one.

                                Obviously not.

                                ??? You didn’t change my opinion, told me my comment sucked (in your subjective opinion) and you certainly didn’t add anything to the conversation. A simple downvote would have sufficed.

                          2. 6

                            ah i feel you. by simple, i mean, simple design.

                            take a look at these popular ‘powerline’ prompts that throw tons of information at you. more often than not, you dont need to know how many files have changed in your current directory.

                            ill admit, building a 300 line program for your prompt is not simple at all!

                            1. 3

                              I am glad you didn’t take offense at my comment, as that was not my intention. I am very much in favor of side projects and sharing work; thanks for sharing!

                              take a look at these popular ‘powerline’ prompts that throw tons of information at you.

                              I am aware of how crazy these things get. I do appreciate the attempt at reducing that complexity to a more manageable form.

                          1. 4

                            Something like this has been on my todo list for a long time. My idea was to rebuild dwm using Racket and the Racket FFI, but I like this approach far better. The author seems to be shipping the barest of functionality, with the hope of users building their own experiences. Not sure how many will take that on – kind of a small community of people who work in scheme and want this level of customization of their window managers, I’d guess.

                            1. 4

                              There’s a sizeable community of users that use similar setups (see vhodges’ comment, although these setups are usually configured with multiple programs and shell scripts).

                              Considering AwesomeWM has a sizeable user base (and it’s configured in Lua), I think Xlambda has a place to exist :D

                              And I hope it becomes good enough that it leads people to be interested in Scheme, like Awesome sometimes helps people get interested in Lua, or like Emacs got some people started with Lisp. That may be a bit of a tall order, but I think that with enough persistence at making Xlambda more and more versatile, such a community might start growing.

                              1. 2

                                If you want to use AwesomeWM but still want to use a lisp, you can use Fennel to write lisp that compiles to Lua!

                                1. 2

                                  That’s been suggested to me before, but I have other goals that don’t exactly mesh with Lua interop.

                                  Thanks for the suggestion, though!

                                2. 1

                                  Considering AwesomeWM has a sizeable user base (and it’s configured in Lua), I think Xlambda has a place to exist :D

                                  I, of course, believe it can exist. The question is really “how many of these users are unhappy enough to make a switch?”

                                  And I hope it becomes good enough that it leads people to be interested in Scheme, like sometimes Awesome helps people get interested in Lua, or like Emacs got some people started with Lisp.

                                  I hope you are successful! It’d be great to grow a larger scheme community.

                                3. 1

                                  You might be interested in WindowChef (https://github.com/tudurom/windowchef), a scriptable window manager. It needs a few pieces to make it work.

                                  I’ve got it set up with multiple desktops (i.e. one for each topic/task), each with one or more windows stacked at the same position and size - kind of a pseudo monocle mode.

                                  Control-Tab cycles through the stack, and Control-Left and Control-Right take me to the previous and next desktop respectively (I generally only use 3-4 desktops).

                                  Right now I use a hotkey to set the location and size of new windows (Control Home) but want to install and configure ruler to do that automatically.

                                1. 21

                                  I disagree. The C programming language is directly responsible for countless damning flaws in modern software and can be credited for the existence of the majority of the modern computer security industry.

                                  You can write system software in many languages, including Lisp. For a less outlandish example, Ada is specifically designed for producing reliable systems worked on by teams of people in a way that reduces errors at program run time.

                                  I find it amusing to mention UNIX fundamentals as a reason to learn C, considering UNIX is the only reason C has persisted for so long anyway. Real operating systems focused largely on interoperation between languages, not funnelling everything through a single one; Lisp machines focused on compilation to a single language, but that language was well-designed and well-equipped, unlike C.

                                  Last but not least, because C is so “low-level”, you can leverage it to write highly performant code to squeeze out CPU when performance is critical in some scenarios.

                                  It’s actually the opposite. The C language is too high-level to correspond to any single machine, yet too low-level for compilers to optimize for the specific machine without gargantuan mechanisms. It’s easier to optimize logical count and bit manipulations in Common Lisp than in C, because Common Lisp actually provides mechanisms for these things; meanwhile, C has no equivalent to logical count and its bit shifting is a poor replacement for field manipulations. Ada permits specifying the bit structures of data types at a very high level, while continuing to use abstract parts, whereas large C projects float in a bog of text-replacement macros.

                                  Those are my thoughts on why C isn’t worth learning, although this is nothing against the author.

                                  1. 5

                                    Unix-like operating systems aside, are people like D. Richard Hipp or Howard Chu doing it wrong and simply wasting their time, then?

                                    1. 12

                                        Your question implies an all-or-nothing type of answer. They could be making a bad choice in language while otherwise doing great design, coding, and testing. There are a lot of talented people who are drawn to C. There’s also sometimes justification, such as available time/talent/tooling, or just making stuff intended for adoption by C programmers.

                                        The few studies that have been done consistently showed C programmers were less productive and their code screwed up more. The language handicaps them. More expressive languages with more safety, which are easier for compilers to understand, are a better solution.

                                      1. 3

                                        The question was rhetorical. I.e. for the aforementioned Howard Chu, C was the obvious and only choice to write LMDB in.

                                        1. 4

                                            Sometimes C simply is the only viable language to write a system in. Databases and programs on microcontrollers with less than 16 KB of RAM are such examples, because in those cases every bit of memory counts.

                                            Although I would definitely not use C blindly, it is still worth learning. But I do think that it is a bad idea to learn it as your first language.

                                          1. 7

                                            Forth would probably be an even better choice for a microcontroller with less than 16KB of RAM, to be honest …

                                            1. 3

                                              I would argue Ada is just as well suited to microcontrollers with less than 16 kB of RAM – perhaps even more than C is.

                                              1. 3

                                                Only if you can do bitwise operations directly on specific cpu registers. With Ada, these operations are not always available, while C nearly always has them.

                                                They are vital if you want to make a logic output pin high or low.

                                                1. 3

                                                  That is no inherent fault of the language, however - as with much of this discussion, the conflation of language with ecosystem obscures meaning. As I mention above, C is only king of the scrap heap because we’re locked in a vicious cycle of building CPUs that execute C better, building compilers that compile better for those CPUs, etc. Similarly we have a vicious cycle of C compatibility in software. Everything is compatible with C because C is compatible with everything.

                                                  1. 4

                                                      I’d love to agree with you, but then we would both be wrong. Furthermore, this is a very short-sighted opinion.

                                                      First: There is an astonishing number of CPUs that are mostly designed to be cheap, fast, or power efficient, and those are certainly not designed specifically to execute C programs better. If they are designed towards a specific programming-related goal, they are optimized towards executing the most-used instructions in their particular instruction set. That is, if they are optimized for anything other than cost at all.

                                                      Second: It doesn’t matter how you design a CPU, somewhere you’ll have to deal with bits and bits are wires which you pull high or low. You’d also have to pull the data off the CPU at some point in time. The simplest, cheapest and most efficient method of doing so is by directly tying a wire that goes off-chip into some part of a register in the CPU.

                                                      Third: I think that the emergence of C is a consequence of how our technology is built, how it functions, and what is most efficient to implement in a silicon die, and not the other way around. The reason that C is compatible with everything is probably that it is easy to use and implement for everything. I think this is because there is a deep connection between how electronic circuits work, the way that C is specified, and the operations you can perform in C.

                                                    I agree with you that the “CPUs are built for C and C is created for CPUs” causation goes both ways, but it is definitely way stronger in the direction of “C is created for CPUs” than the other way around.

                                                      Keep in mind that this article is specifically about C as a systems language; therefore we don’t care about C as a language for applications (in fact, once you are out of the systems domain, you’d probably be better off using something else). However it will be impossible for certain applications to ignore the functioning of their underlying systems down to their (electro-)mechanical levels (e.g. database systems).

                                                    1. 2

                                                      I’d love to agree with you, but then we would both be wrong.

                                                      That’s entirely unnecessary and serves only as an insult. Please don’t.

                                                        First: There is an astonishing number of CPUs that are mostly designed to be cheap, fast, or power efficient, and those are certainly not designed specifically to execute C programs better.

                                                      These are effectively orthogonal concerns. In the embedded space, consider the (relative) failure of the Parallax Propeller compared to the AVR family of microcontrollers. Comparable options exist in the two product lines in terms of power usage, cost, and transistor count, but AVR is a fundamentally serial architecture while Propeller requires the use of multiple threads to take advantage of its transistor count efficiently. A language optimized for this does not have widespread adoption in the embedded space, where aside from Ada and a bit of Rust, C is the absolute king. This is almost certainly a major contributing factor in the relative success of AVR over Propeller (in addition to the wider part range and backing from a major semiconductor manufacturer).

                                                      Second: It doesn’t matter how you design a CPU, somewhere you’ll have to deal with bits and bits are wires which you pull high or low. You’d also have to pull the data off the CPU at some point in time.

                                                      And you don’t need C to do either of those things - in fact, you can’t do them in “C”. You need someone to write some machine code at some point to enable you to do those things, and if you can package that machine code up into a C library or compiler intrinsic you can package it up into a Rust or Ada library just as well.

                                                        Third: I think that the emergence of C is a consequence of how our technology is built, how it functions, and what is most efficient to implement in a silicon die, and not the other way around.

                                                      This is potentially possible, but I suggest you take a look at C Is Not A Low Level Language which discusses the vicious cycle of C and CPU better than I can here.

                                                      One reason I don’t think this is true is because there are examples of using existing silicon technology to build non-serial-like computers; GPUs are a huge one, as are FPGAs and other heterogeneous computing technologies. Those fundamentally cannot be programmed like a serial computer, and that makes them less accessible to even many very skilled systems programmers.

                                                      it will be impossible for certain applications to ignore the functioning of their underlying systems down to their (electro-)mechanical levels (e.g. database systems).

                                                      I hope I didn’t imply that there will ever be a point at which “bare metal engineering” isn’t needed. I’m not saying that low level programming is not essential; I’m saying that you can do low level programming without C in principle, and often even in practice.

                                                      1. 2

                                                        That’s entirely unnecessary and serves only as an insult. Please don’t.

                                                          It wasn’t an insult. It was me stating that I’d love to live in a better world in which I could agree with your viewpoint, but also stating that your viewpoint does not comply with the reality at hand.

                                                          Not everything is, or is meant as, an insult, and you’d be wise to assume nothing is an insult until it undeniably is. Nothing I’ve written so far is an insult, and in fact, I’d rather walk away from a discussion before insults are being made. I won’t waste my time on discussions that serve the purpose of reaffirming one’s, or my own, beliefs.

                                                        This is almost certainly a major contributing factor in the relative success of AVR over Propeller (in addition to the wider part range and backing from a major semiconductor manufacturer).

                                                        I disagree. I think that AVR’s success is mostly due to the fact that in the embedded space, interrupts are more important than multi-threading is. Most embedded jobs simply don’t need multiple threads. It’s not the C language, but economics that is to blame.

                                                        And you don’t need C to do either of those things - in fact, you can’t do them in “C”. You need someone to write some machine code at some point to enable you to do those things, and if you can package that machine code up into a C library or compiler intrinsic you can package it up into a Rust or Ada library just as well.

                                                        Ah but here’s the problem. You’d need to write some extra machine code to set bits in a certain register. That extra machine code would require extra cycles to be executed.

                                                          I’d also like to point out that when you are using C, you don’t need the extra machine code at all! In the embedded- or system-space, you can simply look up the address of a register in the datasheet or description of the instruction set, put that number into your program, treat it as a pointer and then read from or write to the address your pointer is referring to.

                                                        So you just don’t need extra machine code in C. You just “input the number and write to that address” in C. This is why it’s king. A lot of other languages simply can’t do that.
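
                                                          For illustration (the register address here is made up), the usual C idiom is just to cast the raw address to a volatile pointer and write through it; a sketch, not code for any particular chip:

                                                          #include <stdint.h>
                                                          /* Hypothetical GPIO output register address taken from a datasheet. */
                                                          #define GPIO_OUT (*(volatile uint32_t *) 0x40020014u)
                                                          /* Drive an output pin high or low with an ordinary read-modify-write. */
                                                          void set_pin(unsigned pin, int high)
                                                          {
                                                              if (high)
                                                                  GPIO_OUT |= (uint32_t) 1u << pin;
                                                              else
                                                                  GPIO_OUT &= ~((uint32_t) 1u << pin);
                                                          }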

                                                        This is potentially possible, but I suggest you take a look at C Is Not A Low Level Language which discusses the vicious cycle of C and CPU better than I can here.

                                                        I’ve read it, but that does not mean that I agree with that viewpoint. I still think that C is a low level language. Mostly because of the “input the address of a register as a pointer and treat it regularly”-approach C has taken. As for the vicious cycle, I’ve stated my thoughts on that in my previous post quite clearly with:

                                                        I agree with you that the “CPUs are built for C and C is created for CPUs” causation goes both ways, but it is definitely way stronger in the direction of “C is created for CPUs” than the other way around.

                                                        One reason I don’t think this is true is because there are examples of using existing silicon technology to build non-serial-like computers; GPUs are a huge one, as are FPGAs and other heterogeneous computing technologies. Those fundamentally cannot be programmed like a serial computer, and that makes them less accessible to even many very skilled systems programmers.

                                                          First of all: GPUs are multiple serial computers in parallel. It doesn’t matter how you look at it, their data-processing is mostly serial and they suffer from all the nastiness that regular serial computers do when you have to deal with concurrency.

                                                          Second: FPGAs simply aren’t computers. They are circuits. Programmable circuits that you can use to do computations, but they are circuits nonetheless. Expecting that you can efficiently define circuits with C is like expecting that you can drive a screw in with a hammer: “You might accomplish your goals, but you will have a crude result or a very hard time”.

                                                        Third: I’ve been making the argument that C is mainly a consequence of how CPU’s work and (mostly, see my above statement that is also in my previous post) not the other way around.

                                                        I hope I didn’t imply that there will ever be a point at which “bare metal engineering” isn’t needed. I’m not saying that low level programming is not essential; I’m saying that you can do low level programming without C in principle, and often even in practice.

                                                          You did give me the impression that you were implying that bare-metal engineering isn’t needed, and you confirmed that impression by stating that “you’d just need to write some machine code” to get hardware-level access in other languages. The whole point of C is that (if you know the address, that is) you get your hardware access simply by writing to the address of the register that communicates with the hardware, without needing extra machine code.

                                                        C provides you with a level of abstraction for writing machine code, without needing to know the machine code and without needing extra machine code to accomplish your goals.

                                                        That’s why I think that it is a low level language and why I also think that it is still worth learning as a systems language.

                                                          PS: I am by no means a fan of C, but I am very much a fan of using the right tool for each problem, as it makes your life, and the problem, much easier. In the (embedded) systems world, I think that C is often simply the right tool to use.

                                                        1. 1

                                                          So you just don’t need extra machine code in C. You just “input the number and write to that address” in C. This is why it’s king. A lot of other languages simply can’t do that.

                                                            Ada does it better. Ada has attributes for Address and Size, and also permits giving specific meanings to its enumeration types. I’ve never used all of this with Ada, but I believe it would look like this:

                                                            declare
                                                               type Codes is (This, That, Thus);
                                                               for Codes use (This => 1, That => 2, Thus => 17);
                                                               for Codes'Size use 8;
                                                               --  The Address aspect takes a System.Address, not a bare integer;
                                                               --  GNAT's System'To_Address attribute does the conversion.
                                                               Register : Codes := This
                                                                  with Volatile, Address => System'To_Address (16#0ABC#);
                                                          begin
                                                             ...
                                                          end;
                                                          

                                                          So, this is a high-level, type-safe, and simple way to do what you just described, but you usually won’t need to do this and so suffer none of the drawbacks.

                                                          C is worse than useless, because it deceives people such as yourself into believing it has any value whatsoever or is otherwise at all necessary.

                                                          1. 1

                                                              Nice! I didn’t know about this. I’ll definitely look into Ada more when I get the chance.

                                                            C is worse than useless, because it deceives people such as yourself into believing it has any value whatsoever or is otherwise at all necessary.

                                                              And yet I still disagree with you here. There are reasons why C is king and why Ada isn’t.

                                                    2. 2

                                                        Which is why I’ve encouraged authors of new languages to use its data types and calling conventions with seamless interoperability. It took decades for C to get where it is. Replacing it, if it happens at all, will be an incremental process that takes decades.

                                                      Personally, I prefer just developing both new apps in safer languages and compiler-assisted security for legacy C like Softbound + CETS. More cost-effective. Now, C-to-LangX converters might make rewrites more doable. Galois is developing a C-to-Rust tool. I’m sure deep learning could kick ass on this, too.

                                                      1. 1

                                                        I think it’s not realistic to assume that C will be replaced anytime soon, not even in decades. C will still be around, long after Rust has died.

                                                          I also think it’s a pipe dream to assume that other programs can transform C-programs into some safer language while still preserving readability and the exact same behaviour. What you describe has been studied and is known in the scientific literature as “automatic program analysis” and is closely related to the halting problem, which is undecidable. This technology can certainly make many advances, but ultimately it is doomed to fail in a lot of cases. We’ve known this since the 1960s. When it fails, you will simply need knowledge about how C works.

                                                          Furthermore: Deep learning is akin to “black magic” and people simply hate any form of “magic”. At some point you want guarantees. Most traditional compilers give you those because their lemmas and other tricks are rooted in algebras that have been extensively studied before they are put into practice.

                                                        1. 1

                                                          “I think it’s not realistic to assume that C will be replaced anytime soon, not even in decades. C will still be around, long after Rust has died.”

                                                          I agree. There will probably either always be more C than Rust or that way for a long time.

                                                          “ it’s a pipe dream to assume that other programs can transform C-programs into some safer language while still preserving readability and the exact same behaviour. “

                                                            There are already several projects that do it by adding safety checks or security features to every part of the C program that their analyses can’t prove safe. So, your claim is already false. The research is currently focused on further reducing the performance penalty (main goal) and covering more code bases. It probably needs a commercial sponsor with employees that stay at it to keep up with the second goal. They already mostly work, with many components verifiable if anyone wanted to invest the resources into achieving that.

                                                          “Deep learning is akin to “black magic” and people simply hate any form of “magic”. At some point you want guarantees. “

                                                          Sure they do: they’re called optimizing compilers and OS’s. They trust all the magic so long as it behaves the way they expect at runtime. For a transpiler, they could validate it by eye, with test generators, and/or with fuzzing against C implementation comparing outputs/behavior. My idea was doing it by eye on a small pile of examples for each feature. Once it seems to work, add that to automated test suite. Put in a bunch of codebases with really different structure or use of the C language, too.

                                                          1. 1

                                                            Sure they do: they’re called optimizing compilers and OS’s. They trust all the magic so long as it behaves the way they expect at runtime. For a transpiler, they could validate it by eye, with test generators, and/or with fuzzing against C implementation comparing outputs/behavior. My idea was doing it by eye on a small pile of examples for each feature. Once it seems to work, add that to automated test suite. Put in a bunch of codebases with really different structure or use of the C language, too.

                                                              • In a lot of areas, comparing it by eye and testing with fuzzers is simply not going to fly (sometimes in the most literal sense of the word fly).
                                                            • An automated test suite with tons of tests can also slow development down. I’m all for tests, but I am against mindlessly adding a test for each and every failure you’ve encountered.
                                                            • What operating systems do is explainable with relative ease. What most deep-learning systems do is not. If you want guarantees for 100% of all cases, deep learning is immediately out of the picture.

                                                            The research is currently focused on further reducing the performance penalty (main goal) and covering more code bases.

                                                            Herein lies the problem. These tools cover some, but not all codebases. We have known for almost a century that a tool that covers all possible codebases is impossible to construct. See the “Common pitfalls” section on the halting problem on Wikipedia for a quick introduction. You will see that my argument is not false and will still hold, and that means that it is still useful to learn C (which is the main topic under discussion here).

                                                            1. 2

                                                              The by eye, feature by feature testing, and fuzzing in my comment were for the transpiler’s development, not normal programs. You were concerned about its correctness. That would be my approach.

                                                                I’m not buying the halting problem argument since it usually doesn’t apply. It’s one of the most overstretched things in CompSci. The static analyses and compiler techniques work on narrow analyses for specific traits vs the broader goal the halting problem describes. They’ve been getting lots of results on all kinds of programs. If the analysis fails or is infeasible, the tools just add a runtime check for that issue.

                                                              1. 1

                                                                The by eye, feature by feature testing, and fuzzing in my comment were for the transpiler’s development, not normal programs. You were concerned about its correctness. That would be my approach.

                                                                Formal verification is the route the real language and compiler-development teams take (See clang and ghc for example). Fuzzing is something they use, but usually as an afterthought.

                                                                  I’m not buying the halting problem argument since it usually doesn’t apply. It’s one of the most overstretched things in CompSci. The static analyses and compiler techniques work on narrow analyses for specific traits vs the broader goal the halting problem describes. They’ve been getting lots of results on all kinds of programs. If the analysis fails or is infeasible, the tools just add a runtime check for that issue.

                                                                  Fair enough, but I’d still like to point out that dismissing the “halting problem” argument because it usually doesn’t apply, while also stating “we have to make sure something works on all kinds of codebases”, are two polar-opposite ways of reasoning.

                                                                  If you are reasoning like this: “Okay, we know this is impossible because of the results Turing provided us about the halting problem, but let’s see how close we can get to perfection”, or “let’s see if we can build something useful for 80% of cases”, then I approve of the approach and I’ll agree. In this case you probably would also agree with me that there is still value in learning C as a systems language.

                                                                  But if your reasoning is along the following lines: “Look, this works on nearly all codebases in practice and therefore we don’t have to learn C as a systems language”, then you are just simply dead wrong. It’s in the last 5 or 10% where algorithms, ideas and projects fail, not in the easy first 80%.

                                                                  You really require Feynman’s “kind of utter honesty with yourself” when discussing these kinds of topics, because it is very easy to fool yourself into believing in some favourable picture of an ideal where technology or skill x is not needed any more.

                                                                1. 1

                                                                  “Formal verification is the route the real language and compiler-development teams take (See clang and ghc for example).”

                                                                    They don’t use mathematical verification for most compilers. Only two that I know of in the past few years. I’m not sure what V&V methods most compiler teams use. I’d guess they use human review and testing. Maybe you meant formal as in organized reviews. What you said about fuzzing is easily proven true given all the errors the fuzzers find in… about everything.

                                                                    “stating “we have to make sure something works on all kinds of codebases””

                                                                    You keep saying all. Then, you argue with your own claim like I said it. I said “Put in a bunch of codebases with really different structure or use of the C language, too.” As in, keep testing it on different kinds of code bases to improve its general applicability. We don’t have to eliminate all C or C developers out there. I thought that was implied by my advocating compiler techniques for making the remaining C safer.

                                                                    “or “let’s see if we can build something useful for 80% of cases”, then I approve of the approach and I’ll agree. In this case you probably would also agree with me that there is still value in learning C as a systems language.”

                                                                  Basically that. Except, like HLL’s vs C FFI’s, I’m wanting the number of people that need that specific low-level knowledge to go down to whatever minimum is feasible. People that don’t need to deal with internals of C code won’t need to learn C as a systems language: just how to interface with it. People that do need to rewrite, debug, and/or extend C code will benefit from learning C.

                                                                  If a competing language succeeds enough, it might come down to a tiny number of specialists or just folks cross-trained in that which can handle those issues. Much like assembly, proof engineering, and analog are today.

                                                                    “You really require Feynman’s “kind of utter honesty with yourself””

                                                                  I had to learn that lesson about quite a few things. My claims have quite a few qualifiers with each piece proven in a field project. I’m doing that on purpose since I’ve been burned on overpromising in languages, verification, and security before. I don’t remember if I read the essay, though. It’s great so far. Thanks for it.

                                                                  1. 2

                                                                    I’m wanting the number of people that need that specific low-level knowledge to go down to whatever minimum is feasible.

                                                                    I fully agree with that goal, but I question whether or not you are still at a “systems-level” when you can ignore the specific low-level knowledge.

                                                                    People that don’t need to deal with internals of C code won’t need to learn C as a systems language: just how to interface with it.

                                                                      I guess our whole argument boils down to my “belief” that if you are only interfacing with C, you have probably left the systems domain behind already, because you are past the level where you need to be aware of what your bits, registers, processors, caches, buffers, threads and hard drives are doing.

                                                                    If you are on that level, then I totally agree with you that you should use something else than C, unless your utility is run millions of times per day all around the world.

                                                                    I’m doing that on purpose since I’ve been burned on overpromising in languages, verification, and security before.

                                                                    What I want you to take away from this discussion is something similar: You should not “over-blame” C for all kinds of security vulnerabilities (amongst other issues). I agree that the language has certain aspects that make it more inviting to all kinds of issues. In fact I even dare to go as far as to state that C is a language that not just “invites” issues, but that it almost “evokes” those issues.

                                                                    However I also think that the business processes that cause the issues and vulnerabilities which are often attributed to C, are an even bigger (security) problem than C in and of itself is.

                                                                    I don’t remember if I read the essay, though. It’s great so far. Thanks for it.

                                                                    You’re welcome! I’m glad you like it.

                                                                    I’ve also posted the essay as a story. I was surprised that it wasn’t already on here.

                                                                    1. 2

                                                                      re low level knowledge

                                                                      I think I should be more specific here. We are talking about systems languages. The person would need to understand a lot of concepts you mentioned. They just don’t use C to do it. If anything, not using C might work even better since it targets an abstract machine that’s sort of like current hardware and in other ways (esp parallel stuff) nothing like it. Ada w/ ParaSail or Rust with its parallelizers might represent the code in a way that keeps systems properties without C’s properties. So, they still learn about this stuff if they want things to perform well.

                                                                      From there, they might need to integrate with some C. That might be a syscall. That might be easy to treat like a black box. Alternatively, they might have to call a C library to reap its benefits. If it’s well-documented w/ good interface, they can use it without knowing C. If it’s not, they or a C specialist will have to get involved to fix that. So, in this situation, they might still be doing systems programming considering low-level details. They just will need minimal knowledge of C’s way of doing it. That knowledge will go up or down based on what C-based dependencies they include. I hope that makes more sense.

                                                                      re business processes

                                                                        The environment and developers are an important contributor to the vulnerabilities. That C has enough ways to trip up even the best developers makes me put more blame on its complexity in terms of dodging landmines. I still put plenty of blame on the environment, given quality-focused shops have a much lower defect rate. Especially if they use extensive tooling for bug elimination. You could say a major reason I’m against most shops using C is that I know those processes and environments will turn it into many liabilities that will be externalized. Anything that lowers the number of vulnerabilities, or the severe ones, in bad environments can improve things. At the least, memory-safe languages turn it from attackers owning your box to just crashing it or leaking things. Some improvement, given updates are easier if I still control the box. ;)

                                        2. 12

                                              Your reasons to not learn C are… mostly irrelevant. You may not think unix is a “real” operating system, but it’s the most widely used OS family outside of seriously low-powered embedded stuff. Nobody programs for lisp machines. You could write ada, sure, but I don’t imagine there’s a vibrant open source library ecosystem for it.

                                          In your preferred world, where everyone uses lisp machines or what you consider “real” operating systems, you are right, but we’re not in that world. If people are going to continue using unixes, learning C will continue to be worthwhile even if you prefer writing most new code in something better like rust or ada or lisp.

                                          1. 3

                                            You could write ada, sure, but I don’t imagine there’s a vibrant open source library ecosystem for it.

                                            I’m assuming you mean that this vibrancy exists for C libraries? Which makes the statement odd, because calling a C library from Ada is trivial.

                                            (And no, you don’t lose all the advantages of Ada by calling libraries written in C. Any argument to that effect is so off the mark I would have trouble responding to it.)

                                            1. 2

                                                  I wouldn’t say you lose all the advantages of Ada by calling C libraries. However, if you’re calling C libraries, you should have at least a rudimentary understanding of C, right? If anything, using C libraries from Ada is a big argument in favor of learning C, isn’t it? Not because you necessarily should use C in a project, but just because you need to be able to read the documentation, know how to translate a C function signature into a call to that function from Ada, know what pitfalls you can fall into if you’re calling the C code incorrectly, know how to debug the segfaults you’ll inevitably encounter, know how to resolve linker issues, etc. Maybe you’ll even have to write a tiny wrapper in C around another library if that library is particularly happy to abuse macros or if you need to do something stupid like use setjmp/longjmp to deal with errors (like libjpg requires you to do).

                                              1. 1

                                                Sure, but “C still deserves learning once you know Ada” is rarely how it goes.

                                                1. 3

                                                  I don’t know what previous experience you have with being coerced to learn C but the article we’re commenting on here just said that modern C is very different from old C, that knowing C is really useful to be able to study the vast amount of open source C code in your operating system (assuming that’s something unixy), and that there may be times when you want to write something in C for the performance. I’d say “C still deserves learning once you know Ada” is perfectly consistent with those points.

                                                  I’ll admit you do have a point regarding the performance thing; if you know Ada, you may not need C to write performant code. However, I’d bet it’s vastly easier to use a C library from, say, Python, than it is to use an Ada library from Python, so even if you know Ada, writing performance-critical code in C still makes sense in certain fairly common circumstances.

                                                  1. 3

                                                    The biggest difficulty with using Ada libraries from Python is that you have two languages with expressive type systems talking through a third one without, since the OS API is that of C. You have to either come up with a way of encoding and decoding complex data, or reduce the API of the library to the level of expressiveness afforded by C.
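
                                                        As a rough sketch (all names here are hypothetical), the C-level boundary such a binding has to squeeze through tends to look like this, with the richer types on both sides reduced to opaque handles, raw buffers, and integer status codes:

                                                        #include <stddef.h>
                                                        #include <stdint.h>
                                                        /* Hypothetical flattened interface an Ada library might export for use
                                                         * from Python via ctypes/cffi: everything becomes plain C scalar types. */
                                                        typedef struct parser parser_t;                        /* opaque handle */
                                                        int32_t parser_create(parser_t **out);                 /* 0 on success */
                                                        int32_t parser_feed(parser_t *p, const uint8_t *buf, size_t len);
                                                        int32_t parser_result(const parser_t *p, char *out, size_t out_len);
                                                        void    parser_destroy(parser_t *p);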

                                                    To study old code, knowing modern C is not enough, so the point about modern C becomes moot there.

                                                        Also, this is a shameless plug, but I have a real-life demonstration of what calling C and assembly from Ada actually looks like. Calling C: https://github.com/dmbaturin/hvinfo/blob/master/src/hypervisor_check.adb#L108-L121 and calling x86 assembly: https://github.com/dmbaturin/hvinfo/blob/master/src/hypervisor_check.adb#L22-L36

                                                2. 1

                                                  If anything, using C libraries from Ada is a big argument in favor of learning C, isn’t it?

                                                  Perhaps, but you’re advocating that people should learn more modern languages first and do the majority of their programming in those languages, which is not what /u/nanxiao suggested at all.

                                                  1. 2

                                                    Isn’t it though? The article is saying you can squeeze out some extra performance from C, that C really isn’t as bad as it used to be, and that being able to read the source of your operating system and tools is beneficial; “do most of your programming in your preferred language, but know how to read and write C when that turns out to be necessary” seems perfectly consistent with that, doesn’t it?

                                                    1. 2

                                                      I suppose so; however, the post seems to be mostly engaging with users of the Rust and Go languages, which are cast by the author as direct competitors to C (their success causing it to be “ignored”).

                                                      In any case I definitely think this is the right approach.

                                                  2. 1

                                                    “However, if you’re calling C libraries, you should have at least a rudimentary understanding of C, right?”

                                                    Nope. The whole point of putting it behind a function call, aka abstraction, is that you don’t have to understand the internals. Instead, you should just know about the safety issues (i.e. responsible use), what data goes into the function call, and what is returned. You should just have to understand the interface itself.

                                                    The other stuff you mentioned is basically working with the C code inside. That requires knowing about C or at least assembly.

                                                    1. 3

                                                      I don’t think that’s the only interpretation. If you have to build the abstraction at the ffi level yourself, then you generally want to be familiar with C. In many cases, the docs aren’t good enough to cover all of the cases where UB is expressed through the API, so you wind up needing to read the source.

                                                      1. 1

                                                        True in practice.

                                              2. 3

                                                Lisp machines focused on compilation to a single language, but that language was well-designed and well-equipped, unlike C.

                                                We are back in the mainframe age where you can use whichever language you please on the server side. Before we write an alternate history where lisp machines won, how many major players have succeeded with lisp and stuck with it? Biggest I remember was Reddit, and I’m pretty sure they eventually re-wrote everything in Python.

                                                Would also add that as much as I enjoy lisp, I’d never even consider it for something that had the potential to get big. If I have to start dropping newbies into my code, I want types and compile-time warnings.

                                                1. 4

                                                  how many major players have succeeded with lisp and stuck with it?

                                                  ITA famously uses Lisp.

                                                  There are attempts to catalog production uses of Lisp, but they’re almost certainly out of date, incorrect, and don’t highlight companies anyone has really heard of.

                                                  There’s no intention to argue here, just to provide the list I know of in direct answer to your question.

                                                  1. 4

                                                    If I have to start dropping newbies into my code, I want types and compile-time warnings.

                                                    Common Lisp has types and compile-time warnings. It has the ability to treat warnings as errors, too. It’s a pretty great language, all things considered.

                                                    1. 1

                                                      I remember there was some macro to annotate a parameter for type checking, but its behavior was left unspecified by the standard. Has that blank been filled in since then? Or is there another facility that I’ve overlooked entirely?

                                                    2. 3

                                                      Look at the Franz and LispWorks customer pages for an answer to that. Quite a range of uses. Most seem to like its productivity and flexibility. It’s definitely a tiny slice of language users, though.

                                                  1. 3

                                                    I’m more interested in the ways people pronounce “--” as in “--verbose”.

                                                    1. 4

                                                      “dash dash”, or just “dash” for those odd single-dash flags.

                                                      1. 3

                                                        Oh, so I’m not the only one. Many people around me call it “minus”.

                                                    1. 5
                                                      • /etc — et see
                                                      • /lib — lib, as in “liberal”
                                                      • char — char, as in “char broiled”
                                                      • fsck — fuh sk, as in “fun ski”
                                                      • schema — scheme uh, scheme uhs (as in a ruse, or the programming language skeem)
                                                      1. 17

                                                        you’ll likely need to uninstall npm manually (or at least rimraf the .git directory inside it)

                                                        (my emphasis)

                                                        Heh, never seen that verbing before.

                                                        1. 2

                                                          Well, get used to it! ;-) https://www.npmjs.com/package/rimraf

                                                          1. 3

                                                            …why does this exist??

                                                            1. 2

                                                              I mean, code to recursively delete a directory has to be somewhere… it’s not in the Node stdlib so it’s in an npm module. It exists in /bin/rm too, but not everyone’s on a Unix system and shelling out is ugly anyway.

                                                          2. 2

                                                            Which? rimrafing? I have said it out loud but I haven’t ever typed it.

                                                            1. 2

                                                              How do you say fsck? I’ve always said “eff-sick”.

                                                              1. 2

                                                                “eff-sick” also makes sense but I always said “eff ess see kay”.

                                                                1. 2

                                                                  I pronounce it “eff ess chck” - the last like “check” but squished (if that makes sense). I think I’ve heard people pronounce it letter-by-letter before too.

                                                                  1. 1

                                                                    If I’m speaking formally (like to a customer), I say “fs-check”, but when I’m just saying it, it’s “eff-sick”. I dunno why.

                                                                    1. 1

                                                                      That’s fascinating. Language is so interesting!

                                                                      To clarify, when you’re speaking formally, you pronounce “fs” as individual letters?

                                                                  2. 1

                                                                    Sick? I’ve always heard eff-sack!

                                                                    1. 1

                                                                      fuh-sk

                                                                      1. 1

                                                                        For f***’s sake!

                                                                        1. 1

                                                                          I tend to go for “fisk” on that one.

                                                                          I’d love to see a little survey of things like this (how do you pronounce “/etc”? “/lib”? etc.). A “UNIX As She Is Spoke”, perhaps.

                                                                    1. 10

                                                                      I’m not entirely sure what to conclude from this article.

                                                                      1. 12

                                                                        That Charles Moore is a unique person with unique abilities. We can learn from him, but we cannot be him.

                                                                        1. 9

                                                                          We’re also in a slightly different era.

                                                                          We’re so hellbent on seeing software engineering as a ‘team’ activity to the point that we’d rather cripple the expressive capabilities of languages so as to make it easier to scale teams up. This isn’t wrong, but it isn’t necessarily right, either.

                                                                          1. 2

                                                                            This dilemma could be easily solved with a training budget ;).

                                                                        2. 7

                                                                          That you should Be Moore Like Chuck.

                                                                          1. 2

                                                                            This (although I agree with the sentiment that most software is bloated, even though I think many OO languages suffer from this problem more than C). I’m also not sure what “1% the code” means.

                                                                            1. 2

                                                                              It’s a literal 1% of the amount of code.

                                                                              1. 3

                                                                                I understand that, but the whole post is a bit vague. Shouldn’t it be “1% of the code”? Or is it used in an imperative mood? And what is the statement? “1% the code” is not a statement. I’m not sure what he’s claiming either. That he can implement C code in 1% of the line count in Forth? That obviously doesn’t hold for all C code.

                                                                                So, I do understand the “code is bloated” sentiment, but fail to take away any concrete points from the article.

                                                                                1. 3

                                                                                  He’s saying that if you build an application in C, he can build the same application with 1% of the code, as measured in total source-code bytes between the NAND gates (i.e. including the compiler, linker, operating system, etc).

                                                                                  1. 0

                                                                                    Thanks. I was wondering for two reasons:

                                                                                    1. He only implies this claim, and never really phrases it
                                                                                    2. It’s obviously wrong. I’d like to see him do
                                                                                    int main(int argc, char *argv[]) { printf("Hello, world!\n"); return 0; }
                                                                                    

                                                                                    in 0.73 bytes in Forth.

                                                                                    Now, of course this is a trivial program, but this still means that the claim is empty (a “no true Scotsman”-like claim). The same goes for anyone claiming that it’s impossible to write a 100% secure program (or a bug-free one, for that matter). When I counter this with an example, people usually refine it to “it’s impossible to write a nontrivial program that is perfectly secure”. That is something I can’t deny, because it’s so subjective. If you define a program as trivial as long as it has no bugs, you’re certainly right.

                                                                                    (BTW, I don’t deny that it can be unfeasible to make a sufficiently complicated program bug-free. That is something completely different from “there can be no complicated bug-free program”. I also consider bugs caused by the underlying infrastructure, whether software or hardware, to not be in the program. CompCert is a practical example of a program that is well on its way to being bug-free. I’d argue that a compiler for a simpler language, with fewer back-ends, would be non-trivial and could be completely bug-free.)

                                                                                    1. 6

                                                                                      It’s obviously wrong. I’d like to see him do … in 0.73 bytes in Forth.

                                                                                      You’re only counting a fraction of the C code, not the C compiler, the C standard library, or the operating system. You have to count them because you might have to debug them.

                                                                                      Also note his challenge is designed to be fair: But I’m game. Give me a problem with 1,000,000 lines of C. But don’t expect me to read the C, I couldn’t. And don’t think I’ll have to write 10,000 lines of Forth. Just give me the specs of the problem, and documentation of the interface.

                                                                                      It’s very easy to build contrived examples in order to miss the point, but this is only fun when we’re young. There’s real value in trying to understand what Chuck discovered, and if you can puzzle it out to understand the way that Forth can be 100x more productive than C, you’ll probably write better C as well.

                                                                                      1. 2

                                                                                        Oh sorry, I totally misread your last comment. I thought it said excluding the compiler, linker, OS, etc. Now it starts to make more sense, and I finally get his point. But hey, my point was (like codey’s and alexandria’s) that the post is vague: Your explanation is a lot clearer and shorter than his post.

                                                                                      2. 1

                                                                                        .( Hello, world!)

                                                                                        While obviously not literally 1% of the code, I think this shows that the claim is not empty, but in fact supports his point. 4 bytes of non-data code vs 59.

                                                                                        1. 0

                                                                                          As I explained above, I think the claim he’s making is vague. If the claim were that the size of an arbitrary C program can be reduced to 1% using Forth, then my example shows that this is obviously not true. Even if you cheat by counting only non-data code, it’s still off by a factor of 6. As geocar explained, this is not the claim: he counts code in the OS, compiler, linker, etc. as well, which makes his claim much more plausible, since these are typically huge swaths of code.

                                                                                          So my point is, if you have a point, state it clearly and don’t make any invalid claims just because they support your point.

                                                                                          1. 2

                                                                                            You are taking the concept far too literally, please stop being obtuse.

                                                                            1. 3

                                                                              Mixture of work and not-work: learning about all the weird aspects of Java serialization.

                                                                              I also had a really cute idea for an esolang, but it’s more of a big picture idea than something I can implement yet.

                                                                              1. 1

                                                                                I also had a really cute idea for an esolang, but it’s more of a big picture idea than something I can implement yet.

                                                                                Curious if you’d elaborate!

                                                                                1. 3

                                                                                  Sure: I want to write a language that’s based on running some other type of command that can produce errors, and using the errors to control or produce the behavior of the program.

                                                                                  A straightforward way to do this might be using exit codes of commands to generate source code that you then run, but I’d really love to do something based on compiler errors from a different programming language.
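
                                                                                  Not to run off with your idea, but here’s a rough sketch of the exit-code flavour in Python, just to make the mechanics concrete (the command list and the opcode table are entirely made up for illustration):

                                                                                      import subprocess

                                                                                      # Hypothetical mapping from exit codes to "instructions" of the esolang.
                                                                                      OPCODES = {0: "PUSH 1", 1: "ADD", 2: "PRINT"}

                                                                                      def step(cmd):
                                                                                          """Run a shell command and turn its exit status into an instruction."""
                                                                                          result = subprocess.run(cmd, shell=True, capture_output=True)
                                                                                          return OPCODES.get(result.returncode, "NOP")

                                                                                      # A "program" is just a list of commands whose exit codes drive execution.
                                                                                      program = ["true", "false", "sh -c 'exit 2'"]
                                                                                      print([step(cmd) for cmd in program])  # -> ['PUSH 1', 'ADD', 'PRINT']

                                                                                  The compiler-error version would replace the exit codes with parsed diagnostics, which sounds much more fun (and much more cursed).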

                                                                                  1. 2

                                                                                    Oh… do I have ideas here. This is incredibly interesting! Any plans to actually spend time on this? I don’t want to step on your toes…

                                                                                    1. 1

                                                                                      I’d like to spend time on it, but I really don’t know. I have several other projects in various states of active work/neglect. I say go for it if you’re interested: there’s probably more than one way to make something out of the bare idea.

                                                                                      1. 1

                                                                                        Fair enough! My idea is effectively a commentary on error handling in Go. I’ll see if I can find some time to play with it.

                                                                              1. 3
                                                                                • Business: Working on my deconstruct talk! First draft is done, but need to finish getting data and do some rehearsals to figure out the second draft.
                                                                                • Personal: COBRA is about to expire, gotta figure out health insurance :(
                                                                                1. 2

                                                                                  gotta figure out health insurance :(

                                                                                  This is the last thing that someone in the US should have to worry about. :(

                                                                                1. 1

                                                                                  tom7.org is amongst my favorites, and has been for many years at this point.

                                                                                  1. 5

                                                                                    I am curious why v.c isn’t in the repo, but on the web site. Seems odd… perhaps a thing that’s meant to be addressed at some point, but low priority…

                                                                                    1. 1

                                                                                      It’s 15k lines of code with constant changes, it’s not really supposed to be in the repo. It’s more like install.exe :)

                                                                                      1. 6

                                                                                        It should definitely be versioned in one way or another to avoid version confusion. Having a record of changes and also precise versions that you can point people to is essential for packaging. You can put it in a different repo if you don’t want to spoil the history.

                                                                                    1. 15

                                                                                      This article piqued my interest because I work on analytics and search in a microservices environment (separate issue), so moving data out of different services is a big part of being able to do anything. We use Kinesis but find it clunky because it’s difficult to test with locally, since you need a Kinesis client library and DynamoDB to run your code. Kafka, on the other hand, can just be run in Docker and offers many clients for different languages, so it appears simpler to run in a development or testing context. Now AWS has a managed Kafka service, so Kinesis doesn’t offer much more in terms of devops simplicity anymore. Why not switch to Kafka?
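
                                                                                      For what it’s worth, the “many clients” point is easy to demonstrate; here’s a minimal sketch using the kafka-python client against a broker running locally in Docker (the topic name, broker address, and payload are assumptions for illustration):

                                                                                          from kafka import KafkaProducer, KafkaConsumer

                                                                                          # Assumes a Kafka broker is listening locally, e.g. started via Docker.
                                                                                          producer = KafkaProducer(bootstrap_servers="localhost:9092")
                                                                                          producer.send("iot-events", b'{"sensor": "door-42", "open": true}')
                                                                                          producer.flush()

                                                                                          consumer = KafkaConsumer(
                                                                                              "iot-events",
                                                                                              bootstrap_servers="localhost:9092",
                                                                                              auto_offset_reset="earliest",
                                                                                              consumer_timeout_ms=5000,  # give up if nothing arrives
                                                                                          )
                                                                                          for message in consumer:
                                                                                              print(message.value)

                                                                                      No client library for a second AWS service, no DynamoDB lease table, and the whole thing runs against a throwaway local broker.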

                                                                                      After reading this article, I didn’t get any insight as to why Kafka is too complicated. The premise for not needing Kafka here was not a technical one but rather a business one. They asked why even track this IoT data at all if it doesn’t have business value. I think that’s a valid question, but it has nothing to do with Kafka. You could just as well say, “You don’t need Kinesis” (or RabbitMQ, or whatever transport you use to move data). If we have no choice but to accept the business decision of collecting this data and we need to transport it, do we still not need Kafka? Is there a simpler alternative? Would love to hear other people’s thoughts on this.

                                                                                      So to me this article seemed more like a snarky critique of WeWork’s use of IoT data in business rather than the pros/cons of Kafka.

                                                                                      1. 6

                                                                                        So to me this article seemed more like a snarky critique of WeWork’s use of IoT data in business rather than the pros/cons of Kafka.

                                                                                        While the author pokes fun at We, the thing I got out of it is that companies should think twice about their technology stack, and probably choose boring technology. It’s telling that so many people got different things out of it…

                                                                                        1. 2

                                                                                          As for using Kinesis and DynamoDB in testing environments, you can use Kinesalite and Dynalite instead.

                                                                                          1. 2

                                                                                            Yes, it was a bit clickbaity, but tbh way more insightful than I thought it would be.

                                                                                            1. 3

                                                                                              Agreed. At least the title and example held all the way through. Maybe a better title would be “WeWork doesn’t need Kafka, and neither do you.”

                                                                                          1. 10

                                                                                            I’ve been thinking about this a bit as of late as well, as I’m working on some open source programs that I also want to offer as a service, roughly similar to Drew’s SourceHut.

                                                                                            For this, the GPL probably makes more sense. I don’t want to stop anyone from running their own copy of my software, and I don’t mind if they offer it as a service, but I would mind if someone took the source code, made a few modifications, and then offered that as a service. It’s taken me some time to warm up to this, because I also don’t like restricting people’s freedom and am not a huge fan of GNU/FSF/RMS, but I’ve slowly warmed to the idea that the GPL is a better fit for this project.

                                                                                            For most of my other projects this is not really an issue. For example, I recently did some work on a command-line Unicode database querying tool. It’s pretty useful (for me, anyway), but I don’t think anyone is going to add proprietary extensions to it; there’s simply no reason to. The simpler MIT license seems like a better fit here. Even if someone were to use it in a proprietary context, I’d have nothing to lose, so why not allow it?

                                                                                            It seems like a “right tool for the job” kind of thing to me.

                                                                                            1. 24

                                                                                              You’d want the AGPL in your case, then, which is designed for network services like SaaS.

                                                                                              1. 2

                                                                                                This article explicitly states that the AGPL does not address the problem of SaaSS (Service as a Software Substitute), as the FSF/GNU call it:

                                                                                                https://www.gnu.org/licenses/why-affero-gpl.html

                                                                                                I otherwise agree—it is designed for software accessed over a network, and is appropriate for this case; it’s just the “SaaS” part I’m commenting on.

                                                                                                1. 3

                                                                                                  I am aware of this stance, but thanks for pointing to it. It is of course true that a SaaS company may process the data in a way that doesn’t provide the freedoms the AGPL attempts to preserve—I just don’t have a better option to suggest. :(

                                                                                              2. 6

                                                                                                Exactly! I posted my licensing philosophy in another thread recently and it’s basically this.

                                                                                                For libraries (which most of my projects are), I do not want a large license, I do not even want any copyright, my sentiment for libraries is very strongly “I’m throwing this crap out there, do whatever the hell you want.” I used to use the WTFPL, but then switched to the more serious Unlicense. But if I were to pick my preferred license for this now, it would be 0BSD :) For end-user apps, I have no problem with copyleft, the one Android app I made a while ago is under the GPL even.

                                                                                                Also to expand on this: if someone uses, like, my http library in a proprietary project, I don’t see it as a corporation exploiting my work, I see it as my work helping another worker do their job.

                                                                                                1. 5

                                                                                                  I quite like the Blue Oak Model License for a permissive license and have started using it in my open source projects (where I have sole copyright and can license/relicense as I please). Compared to the 0BSD license, it discusses patents (to protect all contributors from liability in case any other contributor enforces a patent they own now or later) and there’s also no required copyright line and dates to keep up to date. It’s odd to me that the 0BSD license would remove the need to include the copyright attribution to gain the license, while still including the copyright line at all.

                                                                                                  1. 1

                                                                                                    Last time I looked at it, it seemed interesting. But I wonder whether it has been reviewed by other lawyers. Also, it does not seem to be OSI/FSF/…-approved yet?

                                                                                                    1. 2

                                                                                                      It’s not, but not due to some issue with the license. The authors are less-than-endorsing of OSI and haven’t applied for approval: https://writing.kemitchell.com/2019/05/05/Rely-on-OSI.html

                                                                                                      1. 1

                                                                                                        That’s unfortunate. I work on a package manager and we aren’t lawyers (nor do we have money to pay for one), so we default to “whatever DFSG/OSI sees as OK, we do too”.

                                                                                                        1. 4

                                                                                                          The Blue Oak Council specifically set out to create a permissive license list first: https://blueoakcouncil.org/list

                                                                                                          The Model License came about due to a lack of the desired qualities in many of the other licenses available for public usage, but it wasn’t the original goal of the project.

                                                                                                          Maybe consider licenses on that list, or parts of the list?

                                                                                                          I also recommend reading the blog post I linked above, because blindly accepting whatever OSI approves will likely not end up well for whoever is accepting the license or that policy.

                                                                                                  2. 1

                                                                                                    You really need patent protection in the license to reduce risk of patent trolling. That’s a huge problem. Most permissive licenses pretend patent law doesn’t exist.

                                                                                                  3. 3

                                                                                                    How does GPL prevent someone from modifying your code and then offering it as a service? The modified code is not being distributed. This is key to the business model of Facebook, Google, Amazon etc and why Reddit changed their license.

                                                                                                    1. 3

                                                                                                      It doesn’t, and that’s okay. But it does prevent people from using my code with their own proprietary extensions without contributing their changes back (the AGPL does, at least).

                                                                                                    2. 4

                                                                                                      I have nothing to lose by it, so why not allow it?

                                                                                                      What people often miss is that applications and libraries in a GPL ecosystem protect each other from patent trolls, tivoization, and, partially, SaaS/cloudification.

                                                                                                      How does the “herd immunity” develop? In the same way companies build large patent portfolios as a legal shield/weapon: if a bad actor enters litigation against one project/product/patent, it can be sued over others.

                                                                                                      1. 7

                                                                                                        There are plenty of other licenses that discuss and protect against patent trolls. Off the top of my head:

                                                                                                        • MPL 2.0
                                                                                                        • CDDL
                                                                                                        • Apache 2
                                                                                                        • Blue Oak Model License
                                                                                                        1. 1

                                                                                                          I did not claim GPL is the only one. Also, some of those do not protect against tivoization or have other issues.

                                                                                                          1. 2

                                                                                                            But do you really care about, say, tivoization, if you wanted to use a more permissive or less copyleft license than the GPL? Patent protection is important for all licenses. Tivoization protection is not.

                                                                                                      2. 1

                                                                                                        There is a deep cultural difference between the Open Source crowd and the Free Software crowd. Open Source crowd says “Right tool for the right job” and the Free Software crowd says “Right tool for the right society”. These are different points of view at a very fundamental level. Open Source people believe in it because they think it makes better software. Free Software people aren’t concerned with making “better quality software”. They think it’s good to make better software, but acknowledge that proprietary might be better in many cases. But that’s not the point of Free Software to them. Free Software people view the GPL as a social hack, not an end in itself.

                                                                                                        1. 2

                                                                                                          Free Software people aren’t concerned with making “better quality software”.

                                                                                                          Citation needed.

                                                                                                          1. 1

                                                                                                            To be more specific, Free Software people don’t view “better quality software” as the end goal. Freedom is the end goal.