1. 3

    Go if it doesn’t have to execute in a browser and the runtime is not a hindrance (and it usually isn’t).

    C, C++, or Rust if a more minimal runtime is needed. Which is/are picked likely depends on what libraries are available and needed.

    C and C++ are non-starters if the application must talk to the internet. I consider it malpractice to write internet-facing software in languages with manual memory management, given how hard we know it is to do correctly. (edit: I’m referring to building new software from scratch, not maintaining the existing software we rely on)

    I’d find something else to work on if the problem lies outside of these domains.

    1. 4

      C++ has automatic memory management with smart pointers. In fact, most garbage-collected languages have runtimes written in C++ that manage memory behind the scenes.

      Is it malpractice to be using operating systems, language runtimes, and other software mostly written in C and C++?

      The real malpractice is proprietary software if anything. Or the fact that we have allowed private companies to install backdoors in all our computer systems.

      1. 6

        See the edit (I assume you replied before seeing it).

        C++ smart pointers still have safety issues. They do not prevent use-after-free bugs. They do not prevent data races (admittedly, Go programs can have data races as well, with all of the same safety concerns). And they don’t do anything about the other large uncharted areas of the C and C++ standards labeled Undefined Behavior.

        It’s important to minimize the existence of this unsafe code if we care about secure systems, particularly for internet-facing applications. Some higher-level languages that promise memory and type safety do have runtimes built in unsafe languages, or use the unsafe features of their host language, and failure to keep invariants has caused real issues with these runtimes and code generation in the past. However, these are much smaller and relatively easy to audit for correctness, than auditing an entire ecosystem of C code.

        1. 1

          I guess it depends on who we are talking about when we say “we care about secure systems”.

          If we are talking about regular people: I would argue social engineering (phishing) and backdoors from government agencies via corporations are a bigger concern than random hackers on the internet.

          If we are talking about tech employees: I would argue social engineering and backdoors from government agencies are a bigger concern than random hackers on the internet.

          Doesn’t matter how safe your system is memory-wise if it is weakly engineered from a human perspective.

          I watched a great presentation that showed how to replicate the LastPass UI within the browser at a pixel-perfect level. LastPass can be written in the most secure language, but its UI is flawed.

          1. 1

            I’m not sure I agree with you. A chain of zero-days could have you pwned just by visiting a compromised website—certainly something we want to avoid. However, I’m not convinced that Rust is the solution to all our security woes. When Redox OS asked for people to test it, they found all sorts of security bugs. Maybe writing it in Rust minimized some of them, maybe not.

            I don’t think backdoors from government agencies are nearly as big of an issue as zero-days stocked up by government agencies.

            1. 1

              It’s much easier for said agencies and companies to simply add the backdoor instead of trying to find zero-days, don’t you think?

      2. 1

        You don’t have to like or use C or C++. But to describe using them as malpractice is just offensive trolling.

        1. 1

          If that came across as trolling, I didn’t mean it that way. This was a post asking for opinions, and that is my honest opinion. I’m also not completely against the use of C and C++, but you must consider attack surface, and the internet is an extremely large one.

      1. 10

        Speaking specifically about the flag anomaly, I much prefer Go’s flag package because it removes ambiguity. In a getopt(1) program, when I see -abc, this could mean two separate things: either there are three flags -a -b -c written in a shortened form, or there is a single flag -a which takes a string argument bc. Go’s flag removes this ambiguity entirely.
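
        A minimal sketch of this point, using a hypothetical flag name: with Go’s flag package, -abc can only ever refer to a single flag named “abc”, never a bundle of -a -b -c.

        ```go
        package main

        import (
        	"flag"
        	"fmt"
        )

        func main() {
        	// Hypothetical flag set; "abc" is a made-up flag name for illustration.
        	fs := flag.NewFlagSet("demo", flag.ContinueOnError)
        	abc := fs.Bool("abc", false, "a single flag named abc")
        	// Go parses "-abc" as the one flag named "abc" — there is no
        	// getopt-style bundling, so no ambiguity to resolve.
        	if err := fs.Parse([]string{"-abc"}); err != nil {
        		fmt.Println("parse error:", err)
        		return
        	}
        	fmt.Println("abc =", *abc)
        }
        ```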

        1. 8

          It doesn’t, because you as an end user don’t know if the program is written in Go, or if the author wrote their own argument handling because they didn’t like the flag package.

          1. 2

            Still, clarity is something to strive towards as we develop software.

            There are other reasons I prefer flag as well. For example, it is possible to provide boolean flags with true defaults, and to disable them with -name=false. This is in contrast to getopt(1)-style flag handling where individual character flags can only be toggled on.
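
            A short sketch of that behavior, with a hypothetical “color” flag: it defaults to true and can be switched off with -color=false, which single-character getopt flags cannot express.

            ```go
            package main

            import (
            	"flag"
            	"fmt"
            )

            func main() {
            	// Hypothetical "color" flag that defaults to true. In getopt(1)-style
            	// handling, a single-character flag can only be toggled on; here the
            	// caller can explicitly disable it.
            	fs := flag.NewFlagSet("demo", flag.ContinueOnError)
            	color := fs.Bool("color", true, "enable color output")
            	if err := fs.Parse([]string{"-color=false"}); err != nil {
            		fmt.Println("parse error:", err)
            		return
            	}
            	fmt.Println("color =", *color)
            }
            ```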

            1. 1

              I personally am a fan of having both — short and long options — around.

              I use the short options when typing in a shell, because it’s convenient. I also usually bunch together multiple options, e.g. ls -ltr, just because of typing efficiency. (I also type python3 -mvenv env without space between the option and the argument, sue me!) For shell scripts on the other hand long options might make code more readable/self-explanatory, especially if co-workers also have to maintain it.

              That’s why I like having the single-dash prefix reserved for short options and double-dash for long options, because both variants have their place.

        1. 3

          Interesting find! So the service in question was not only running as root, but the webserver/app wasn’t even chrooted. Reminds me a bit of the Apache phf exploit from 1996. People, always wear your seatbelt!

          Just as an aside, and I know this article isn’t about any particular language, but Go makes it trivially easy to get a webserver running - but it’s not chrooted and runs as root. Would be nice if it were trivially easy to get a webserver running with basic defenses.

          1. 2

            You can listen on high ports as an unprivileged user and redirect traffic at the firewall (I do this for my Go servers in production).
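
              A minimal sketch of that setup (the port choice and firewall rule are illustrative, not prescriptive): the server binds an unprivileged high port, and the firewall redirects port 80 to it.

              ```go
              package main

              import (
              	"fmt"
              	"log"
              	"net"
              )

              func main() {
              	// Bind an unprivileged port (>1024) as a regular user — no root needed.
              	// In production you might pick e.g. 8080 and redirect port 80 to it at
              	// the firewall, for example with:
              	//   iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
              	// Port 0 is used here only so the sketch always finds a free port.
              	ln, err := net.Listen("tcp", "127.0.0.1:0")
              	if err != nil {
              		log.Fatal(err)
              	}
              	defer ln.Close()
              	fmt.Println("listening on", ln.Addr())
              }
              ```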

            1. 2

              but Go makes it trivially easy to get a webserver running - but it’s not chrooted and runs as root.

              If it was dev you’d be at a port above 1024, so a regular user would run it. And if it was prod, wouldn’t you drop privileges after start via your service manager?

              Or were you just saying how easy it is to just sudo ./web-server

              1. 1

                …or use authbind and not worry about privileges at all.

              2. 1

                One of the downsides of chroot is that it’s not implemented on Windows, so it’ll make your webserver not cross-platform. I’m not sure what alternatives there are for Windows?

                1. 1

                  Docker containers usually run their local system as root, and most compiler services I know of run in Docker containers or something similar. Eventually we’ll get to a capability-based reality where you have to explicitly grant permissions rather than explicitly taking them away, but we’re not quite there yet.

                1. 1

                  There seems to be a belief amongst memory safety advocates that it is not one out of many ways in which software can fail, but the most critical one in existence today, and that, if programmers can’t be convinced to switch languages, maybe management can be made to force them.

                  I didn’t see this kind of zeal when (for example) PHP software fell prey to SQL injections left and right, but I’m trying to understand it. The quoted statistics about found vulnerabilities seem unconvincing, and are just as likely to indicate that static analysis tools have made these kinds of programming errors easy to find in existing codebases.

                  1. 19

                    Not all vulnerabilities are equal. I prioritize those that give attackers full control over my computer. They’re the worst. They can lead to every other problem. Plus, their rootkits or damage might not let you have it back. You can lose the physical property, too. Alex’s field evidence shows memory unsafety causes around 70-80% of this. So, worrying about hackers hitting native code, it’s rational to spend 70-80% of one’s effort eliminating memory unsafety.

                    More damning is that languages such as Go and D make it easy to write high-performance, maintainable code that’s also memory safe. Go is easier to learn with a huge ecosystem behind it, too. Ancient Java being 10-15x slower than C++ made for a good reason not to use it. Now, most apps are bloated/slow, the market uses them anyway, some safe languages are really lean/fast, using them brings those advantages, and so there’s little reason left for memory-unsafe languages. Even in intended use cases, one can often use a mix of memory-safe and -unsafe languages with unsafe used on performance-sensitive or lowest-level parts of the system. Moreover, safer languages such as Ada and Rust give you guarantees by default on much of that code, allowing you to selectively turn them off only where necessary.

                    If using unsafe languages and having money, there’s also tools that automatically eliminate most of the memory unsafety bugs. That companies pulling in 8-9 digits still have piles of them shows total negligence. Same with those in open-source development who aren’t doing much better. So, on that side of things, whatever tool you encourage should lead to memory safety even with apathetic, incompetent, or rushed developers working on code with complex interactions. Double true if it’s multi-threaded and/or distributed. Safe, orderly-by-default setup will prevent loads of inevitable problems.

                    1. 13

                      The quoted statistics about found vulnerabilities seem unconvincing

                      If studies by security teams at Microsoft and Google, and analysis of Apple’s software is not enough for you, then I don’t know what else could convince you.

                      These companies have huge incentives to prevent exploitable vulnerabilities in their software. They get the best developers they can, they are pouring many millions of dollars into preventing these kinds of bugs, and still regularly ship software with vulnerabilities caused by memory unsafety.

                      “Why bother with one class of bugs, if another class of bugs exists too” position is not conducive to writing secure software.

                      1. 3

                        “Why bother with one class of bugs, if another class of bugs exists too” position is not conducive to writing secure software.

                        No - but neither is pretending that you can eliminate a whole class of bugs for free. Memory safe languages are free of bugs caused by memory unsafety - but at what cost?

                        What other classes of bugs do they make more likely? What is the development cost? Or the runtime performance cost?

                        I don’t claim to have the answers but a study that did is the sort of thing that would convince me. Do you know of any published research like this?

                        1. 9

                          No - but neither is pretending that you can eliminate a whole class of bugs for free. Memory safe languages are free of bugs caused by memory unsafety - but at what cost?

                          What other classes of bugs do they make more likely? What is the development cost? Or the runtime performance cost?

                          The principal cost of memory safety in Rust, IMO, is that the set of valid programs is more heavily constrained. You often hear this manifest as “fighting with the borrow checker.” This is definitely an impediment. I think a large portion of folks get past this stage, in the sense that “fighting the borrow checker” is, for the most part, a temporary hurdle. But there are undoubtedly certain classes of programs that Rust will make harder to write, even for Rust experts.

                          Like all trade offs, the hope is that the juice is worth the squeeze. That’s why there has been a lot of effort in making Rust easier to use, and a lot of effort put into returning good error messages.

                          I don’t claim to have the answers but a study that did is the sort of thing that would convince me. Do you know of any published research like this?

                          I’ve seen people ask this before, and my response is always, “what hypothetical study would actually convince you?” If you think about it, it is startlingly difficult to do such a study. There are many variables to control for, and I don’t see how to control for all of them.

                          IMO, the most effective way to show this is probably to reason about vulnerabilities due to memory safety in aggregate. But to do that, you need a large corpus of software written in Rust that is also widely used. But even this methodology is not without its flaws.

                          1. 2

                            If you think about it, it is startlingly difficult to do such a study. There are many variables to control for, and I don’t see how to control for all of them.

                            That’s true - but my comment was in response to one claiming that the bug surveys published by Microsoft et al should be convincing.

                            I could imagine something similar being done with large Rust code bases in a few years, perhaps.

                            I don’t have enough Rust experience to have a good intuition on this so the following is just an example. I have lots of C++ experience with large code bases that have been maintained over many years by large teams. I believe that C++ makes it harder to write correct software: not (just) because of memory safety issues, undefined behavior etc. but also because the language is so large, complex and surprising. It is possible to write good C++ but it is hard to maintain it over time. For that reason, I have usually promoted C rather than C++ where there has been a choice.

                            That was a bit long-winded but the point I was trying to make is that languages can encourage or discourage different classes of bugs. C and C++ have the same memory safety and undefined behavior issues but one is more likely than the other to engender other bugs.

                            It is possible that Rust is like C++, i.e. that its complexity encourages other bugs even as its borrow checker prevents memory safety bugs. (I am not now saying that is true, just raising the possibility.)

                            This sort of consideration does not seem to come up very often when people claim that Rust is obviously better than C for operating systems, for example. I would love to read an article that takes this sort of thing into account - written by someone with more relevant experience than me!

                            1. 7

                              I’ve been writing Rust for over 4 years (after more than a decade of C), and in my experience:

                              • For me Rust has completely eliminated memory unsafety bugs. I don’t even use debuggers or Valgrind any more, unless I’m integrating Rust with C.
                              • I used to have, at least during development, all kinds of bugs that spray the heap, corrupt some data somewhere, use uninitialized memory, use-after-free. Now I get compile-time errors or panics (which are safe, technically like C++ exceptions).
                              • I get fewer bugs overall. Lack of NULL and mandatory error handling are amazing for reliability.
                              • Built-in unit test framework, richer standard library and easy access to 3rd party dependencies help too (e.g. instead of hand-rolling another own buggy hash table, I use a well-tested well-optimized one).
                              • My Rust programs are much faster. Single-threaded Rust is 95% as fast as single-threaded C, but I can easily parallelize way more than I’d ever dare in C.

                              The costs:

                              • Rust’s compile times are not nice.
                              • It took me a while to become productive in Rust. “Getting” ownership requires unlearning C and a lot of practice. However, I’m not fighting the borrow checker any more, and I’m more productive in Rust thanks to higher-level abstractions (e.g. I can write a map/reduce iterator that collects something into a btree — in 1 line).
                        2. 0

                          Of course older software, mostly written in memory-unsafe languages, sometimes written in a time when not every device was connected to a network, contains more known memory vulnerabilities. Especially when it’s maintained and audited by companies with excellent security teams.

                          These statistics don’t say much at all about the overall state of our software landscape. It doesn’t say anything about the relative quality of memory-unsafe codebases versus memory-safe codebases. It also doesn’t say anything about the relative sizes of memory-safe and memory-unsafe codebases on the internet.

                          1. 10

                            iOS and Android aren’t “older software”. They’ve been born to be networked, and supposedly secure, from the start.

                            Memory-safe codebases have 0% memory-unsafety vulnerabilities, so that is easily comparable. For example, check out the CVE database. Even within one project — Android — you can easily see whether the C or the Java layers are responsible for the vulnerabilities (spoiler: it’s C, by far). There’s a ton of data on all of this.

                            1. 2

                              Android is largely cobbled together from older software, as is iOS. I think Android still needs a Fortran compiler to build some dependencies.

                              1. 9

                                That starts to look like a No True Scotsman. When real-world C codebases have vulnerabilities, they’re somehow not proper C codebases. Even when they’re part of flagship products of top software companies.

                                1. 2

                                  I’m actually not arguing that good programmers are able to write memory-safe code in unsafe languages. I’m arguing vulnerabilities happen at all levels in programming, and that, while memory safety bugs are terrible, there are common classes of bugs in more widely used (and more importantly, more widely deployed languages), that make it just one class of bugs out of many.

                                  When XSS attacks became common, we didn’t implore VPs to abandon Javascript.

                                  We’d have reached some sort of conclusion earlier if you’d argued with the point I was making rather than with the point you wanted me to make.

                                  1. 4

                                    When XSS attacks became common, we didn’t implore VPs to abandon Javascript.

                                    Actually did. Sites/companies that solved XSS did so by banning generation of markup “by hand”, and instead mandated use of safe-by-default template engines (e.g. JSX). Same with SQL injection: years of saying “be careful, remember to escape” didn’t work, and “always use prepared statements” worked.

                                    These classes of bugs are prevalent only where developers think they’re not a problem (e.g. they’ve always been writing pure PHP, and will continue to write pure PHP forever, because there’s nothing wrong with it, apart from the XSS and SQLi, which are a force of nature and can’t be avoided).
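
                                    A minimal sketch of the safe-by-default idea, using Go’s standard html/template (the template string and input are made up for illustration): output is escaped automatically, so attacker-controlled input can’t inject markup.

                                    ```go
                                    package main

                                    import (
                                    	"html/template"
                                    	"os"
                                    )

                                    func main() {
                                    	// html/template escapes interpolated values by default, unlike
                                    	// building markup "by hand" with string concatenation.
                                    	t := template.Must(template.New("page").Parse("<p>Hello, {{.}}!</p>\n"))
                                    	// A hypothetical untrusted input — rendered inert, not executed.
                                    	untrusted := "<script>alert(1)</script>"
                                    	if err := t.Execute(os.Stdout, untrusted); err != nil {
                                    		panic(err)
                                    	}
                                    }
                                    ```

                                    The same “make the safe thing the default thing” shape is what prepared statements do for SQL: the query and the data travel separately, so there is nothing to forget to escape.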

                                    1. 1

                                      This kind of makes me think of someone hearing others talk about trying to lower the murder rate and then hysterically going into a rant about how murder is only one class of crime.

                                      1. -1

                                        I think a better analogy is campaigning aggressively to ban automatic rifles when the vast majority of murders are committed using handguns.

                                        Yes, automatic rifles are terrible. But pointing them out as the main culprit behind the high murder rate is also incorrect.

                                        1. 4

                                          That analogy is really terrible and absolutely not fitting the context here. It’s also very skewed, the murder rate is not the reason for calls for bans.

                                    2. 2

                                      Although I mostly agree, I’ll note Android was originally built by a small business acquired by Google that continued to work on it probably with extra resources from Google. That makes me picture a move fast and break things kind of operation that was probably throwing pre-existing stuff together with their own as quickly as possible to get the job done (aka working phones, market share).

                                  2. 0

                                    Yes, if you zoom in on code bases written in memory-unsafe languages, you unsurprisingly get a large number of memory-unsafety vulnerabilities.

                                    1. 12

                                      And that’s exactly what illustrates “eliminates a class of bugs”. We’re not saying that we’ll end up in utopia. We just don’t need that class of bugs anymore.

                                      1. 1

                                        Correct, but the author is arguing that this is an exceptionally grievous class of security bugs, and (in another article) that developers’ judgement should not be trusted on this matter.

                                        Today, the vast majority of new code is written for a platform where execution of untrusted memory-safe code is a core feature, and the safety of that platform relies on a stack of sandboxes written mostly in C++ (browser) and Objective-C/C++/C (system libraries and kernel).

                                        Replacing that stack completely is going to be a multi-decade effort, and the biggest players in the industry are just starting to dip their toes in memory-safe languages.

                                        What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                        1. 11

                                          Replacing that stack completely is going to be a multi-decade effort, and the biggest players in the industry are just starting to dip their toes in memory-safe languages.

                                          Hm, so. Apple has developed Swift, which is generally considered a systems programming language, to replace Objective-C, which was their main programming language and already had safety features like baked in ARC. Google has implemented Go. Mozilla Rust. Google uses tons of Rust in Fuchsia and has recently imported the Rust compiler into the Android source tree.

                                          Microsoft has recently been blogging about Rust quite a lot, including about how central memory problems are to their security story. Before that, Microsoft put tons of engineering effort into Haskell as a research base and C#/.NET as a replacement for their C/C++ APIs.

                                          Amazon has implemented Firecracker in Rust and bragged about it at their AWS keynote.

                                          Come again about “dipping toes”? Yes, there’s huge amounts of stack around, but there’s also huge amounts to be written!

                                          What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                          Because it’s always been a crisis and now we have the tech to fix it.

                                          P.S.: In case this felt a bit like bragging Rust over the others: it’s just where I’m most aware of things happening. Go and Swift are doing fine, I just don’t follow as much.

                                          1. 2

                                            The same argument was made for Java, which on top of its memory safety, was presented as a pry bar against the nearly complete market dominance of the Wintel platform at the time. Java evangelism managed to convert new programmers - and universities - to Java, but not the entire world.

                                            Oracle’s deadly embrace of Java didn’t move it to rewrite its main cash cow in Java.

                                            Rust evangelists should ask themselves why.

                                            I think that of all the memory-safe languages, Microsoft’s C++/CLI effort comes closest to understanding what needs to be done to entice coders to move their software into a memory-safe environment.

                                            At my day job, I actually try to spend my discretionary time trying to move our existing codebase to a memory-safe language. It’s mostly about moving the pieces into place so that green-field software can seamlessly communicate with our existing infrastructure. Then seeing what parts of our networking code can be replaced, slowly reinforcing the outer layers while the inner core remains memory unsafe.

                                            Delicate stuff, not something you want the VP of Engineering to issue edicts about. In the meantime, I’m still a C++ programmer, and I really don’t appreciate this kind of article painting a big target on my back.

                                            1. 4

                                              Java and Rust are vastly different ball parks for what you describe. And yet, Java is used successfully in the database world, so it is definitely to be considered. The whole search engine database world is full of Java stacks.

                                              Oracle didn’t rewrite its cash cow, because - yes, they are risk-averse and that’s reasonable. That’s no statement on the tech they write it in. But they did write tons of Java stacks around Oracle DB.

                                              It’s an argument on the level of “Why isn’t everything at Google Go now?” or “Why isn’t Apple using Swift for everything?”.

                                              1. 2

                                                Looking at https://news.ycombinator.com/item?id=18442941 it seems that it was too late for a rewrite when Java matured.

                                            2. 8

                                              What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                              To start the multi-decade effort now, and not spend more decades just saying that buffer overflows are fine, or that—despite 40 years of evidence to the contrary—programmers can just avoid causing them.

                                  3. 9

                                    I didn’t see this kind of zeal when (for example) PHP software fell prey to SQL injections left and right

                                    You didn’t? SQL injections are still #1 in the OWASP top 10. PHP had to retrain an entire generation of engineers to use mysql_real_escape_string over vulnerable alternatives. I could go on…

                                    I think we have internalized the SQL injection arguments but have still not accepted the memory safety arguments.

                                    1. 3

                                      I remember arguments being presented to other programmers. This article (and another one I remembered, which, as it turns out, is written by the same author: https://www.vice.com/en_us/article/a3mgxb/the-internet-has-a-huge-cc-problem-and-developers-dont-want-to-deal-with-it ) explicitly target the layperson.

                                      The articles use the language of whistleblowers. They suggest that counter-arguments are made in bad faith, that developers are trying to hide this ‘dirty secret’. Consider that C/C++ programmers skew older, have less rosy employment prospects, and that this article feeds nicely into the ageist prejudices already present in our industry.

                                      Arguments aimed at programmers, like this one at least acknowledge the counter-arguments, and frame the discussion as one of industry maturity, which I think is correct.

                                      1. 2

                                        I do not see it as bad faith. There are a non-zero number of people who say they can write memory safe C++ despite there being a massive amount of evidence that even the best programmers get tripped up by UB and threads.

                                        1. 1

                                          Consider that C/C++ programmers skew older, have less rosy employment prospects, and that this article feeds nicely into the ageist prejudices already present in our industry.

                                          There’s an argument to be made that the resurging interest in systems programming languages through Rust, Swift and Go futureproofs experience in those areas.

                                      2. 5

                                        Memory safety advocate here. It is the most pressing issue because it invokes undefined behavior. At that point, your program is entirely meaningless and might do anything. Security issues can still be introduced without memory unsafety of course, but you can at least reason about them, determine the scope of impact, etc.

                                      1. 15

                                        ugh. https://github.com/bdmac/strong_password/blob/master/CHANGELOG#L1

                                        The CHANGELOG doesn’t mention this rubygems incident and it ALSO breaks BC. Maybe I’m overly pessimistic and paranoid but I’d have republished 0.0.6 unchanged as 0.0.8 and released anything new as 0.1.0.

                                        1. 1

                                          Exactly. If you’re doing breaking changes, you shouldn’t increment the patch version…

                                          1. 4

                                            Pre-1.0.0 releases in semantic versioning have no defined compatibility requirements with any other version.

                                            Though, if this module is being used in production, a 1.0.0 release should be cut. Even more so since this is open source and you don’t know all of the consumers.

                                            e: sp

                                            1. 2

                                              fair, that’s a good point.

                                            2. 2

                                              With a 0.0.x I wouldn’t call that a problem, really. It just irks me that I suppose a lot of people might think not to upgrade from 0.0.7 although they should. Or would gem warn you that the installed version was yanked?

                                          1. 10

                                            I’ve been thinking about this a bit as of late as well, as I’m working on some open source programs that I also want to offer as a service, roughly similar to Drew’s SourceHut.

                                            For this, the GPL probably makes more sense. I don’t want to stop anyone from running their own copy of my software, and I don’t mind if they offer it as a service, but I would mind if someone would take the source code, make a few modifications, and then offer that as a service. It’s taken me some time to warm up to this, because I also don’t like restricting people’s freedom and am not a huge fan of GNU/FSF/RMS, but I’ve slowly warmed to the idea that the GPL is a better fit for this project.

                                            For most of my other projects this is not really an issue. For example, I recently did some work on a command-line Unicode database querying tool. It’s pretty useful (for me, anyway), but I don’t think anyone is going to add proprietary extensions to it; there’s simply no reason to. The simpler MIT license seems like a better fit for this. Even if someone were to use it in a proprietary context, I have nothing to lose by it, so why not allow it?

                                            It seems like a “right tool for the job” kind of thing to me.

                                            1. 24

                                              You’d want the AGPL in your case, then, which is designed for network services like SaaS.

                                              1. 2

                                                This article explicitly states that the AGPL does not address the problem of SaaSS (Service as a Software Substitute), as the FSF/GNU call it:

                                                https://www.gnu.org/licenses/why-affero-gpl.html

                                                I otherwise agree—it is designed for software accessed over a network, and is appropriate for this case; it’s just the “SaaS” part I’m commenting on.

                                                1. 3

                                                  I am aware of this stance, but thanks for pointing to it. It is of course true that a SaaS company may process the data in a way that doesn’t provide the freedoms the AGPL attempts to preserve—I just don’t have a better option to suggest. :(

                                              2. 6

                                                Exactly! I posted my licensing philosophy in another thread recently and it’s basically this.

                                                For libraries (which most of my projects are), I do not want a large license, I do not even want any copyright, my sentiment for libraries is very strongly “I’m throwing this crap out there, do whatever the hell you want.” I used to use the WTFPL, but then switched to the more serious Unlicense. But if I were to pick my preferred license for this now, it would be 0BSD :) For end-user apps, I have no problem with copyleft, the one Android app I made a while ago is under the GPL even.

                                                Also to expand on this: if someone uses, like, my http library in a proprietary project, I don’t see it as a corporation exploiting my work, I see it as my work helping another worker do their job.

                                                1. 5

                                                  I quite like the Blue Oak Model License for a permissive license and have started using it in my open source projects (where I have sole copyright and can license/relicense as I please). Compared to the 0BSD license, it discusses patents (to protect all contributors from liability in case any other contributor enforces a patent they own now or later) and there’s also no required copyright line and dates to keep up to date. It’s odd to me that the 0BSD license would remove the need to include the copyright attribution to gain the license, while still including the copyright line at all.

                                                  1. 1

                                                    Last time I saw it, it looked interesting. But I wonder whether it has been reviewed by other lawyers. Also, it does not seem to be OSI/FSF/…-approved yet?

                                                    1. 2

                                                      It’s not, but not due to some issue with the license. The authors are less-than-endorsing of OSI and haven’t applied for approval: https://writing.kemitchell.com/2019/05/05/Rely-on-OSI.html

                                                      1. 1

                                                        That’s unfortunate. I work on a package manager and we aren’t lawyers (nor do we have money to pay any), so we default to “whatever DFSG/OSI sees as OK, we do too”.

                                                        1. 4

                                                          The Blue Oak Council specifically set out to create a permissive license list first: https://blueoakcouncil.org/list

                                                          The Model License came about due to a lack of the desired qualities in many of the other licenses available for public usage, but it wasn’t the original goal of the project.

                                                          Maybe consider licenses on that list, or parts of the list?

                                                          I also recommend reading the blog post I linked above, because blindly accepting whatever OSI approves will likely not end up well for whoever is accepting the license or that policy.

                                                  2. 1

                                                    You really need patent protection in the license to reduce risk of patent trolling. That’s a huge problem. Most permissive licenses pretend patent law doesn’t exist.

                                                  3. 3

                                                    How does GPL prevent someone from modifying your code and then offering it as a service? The modified code is not being distributed. This is key to the business model of Facebook, Google, Amazon etc and why Reddit changed their license.

                                                    1. 3

                                                      It doesn’t, and that’s okay. But it does prevent people from using my code with their own proprietary extensions without contributing their changes back (the AGPL does, at least).

                                                    2. 4

                                                      I have nothing to lose by it, so why not allow it?

                                                      What people often miss is that applications and libraries in a GPL ecosystem protect each other from patent trolls, tivoization, and, partially, SaaS/cloudification.

                                                      How does the “herd immunity” develop? In the same way companies create large patent portfolios as a legal shield/weapon: if a bad actor enters litigation against one project/product/patent, it can be sued regarding others.

                                                      1. 7

                                                        There are plenty of other licenses that discuss and protect against patent trolls. Off the top of my head:

                                                        • MPL 2.0
                                                        • CDDL
                                                        • Apache 2
                                                        • Blue Oak Model License
                                                        1. 1

                                                          I did not claim GPL is the only one. Also, some of those do not protect against tivoization or have other issues.

                                                          1. 2

                                                            But do you really care about, say, tivoization, if you wanted to use a more permissive or less copyleft license than the GPL? Patent protection is important for all licenses. Tivoization protection is not.

                                                      2. 1

                                                        There is a deep cultural difference between the Open Source crowd and the Free Software crowd. Open Source crowd says “Right tool for the right job” and the Free Software crowd says “Right tool for the right society”. These are different points of view at a very fundamental level. Open Source people believe in it because they think it makes better software. Free Software people aren’t concerned with making “better quality software”. They think it’s good to make better software, but acknowledge that proprietary might be better in many cases. But that’s not the point of Free Software to them. Free Software people view the GPL as a social hack, not an end in itself.

                                                        1. 2

                                                          Free Software people aren’t concerned with making “better quality software”.

                                                          Citation needed.

                                                          1. 1

                                                            To be more specific, Free Software people don’t view “better quality software” as the end goal. Freedom is the end goal.

                                                      1. 1

                                                        I am fine if people want to make money with my software, as long as whatever they add on to my software is also open source.

                                                        People are willing to pay for software, even open source, as getting from the source code to a finished binary can be a pain. I want an open source license that doesn’t allow people to use my stuff as part of a closed source project, unless they pay me.

                                                        I don’t know if such a thing exists, so for now I am using this:

                                                        https://choosealicense.com/licenses/osl-3.0

                                                        1. 3

                                                          You don’t need a single license for this. As long as you maintain sole copyright, you can license the software under any additional license someone is willing to pay for. That license applies to their copy, while the open source (likely copyleft) license applies to everyone else who receives the license without pay.

                                                        1. 1

                                                          I agree with the conclusions in this article yet cannot deny the existence of successful projects that use more permissive licenses, the BSDs being an obvious example. Even if/when a corporation takes the source for their own gain, the community of developers surrounding the original project remains unchanged, assuming the project is not a service wanting to turn a profit.

                                                          When you’re developing a service you hope to charge a fee for, such as sourcehut, it seems wise to use less permissive licenses like GPL, but for the other cases I’m tempted to think it might not make much difference whether you use MIT or GPL?

                                                          edit: clarified point

                                                          1. 2

                                                            I used to think this as well, but I also think that the GPL played a large role in Linux’s greater popularity over BSD.

                                                            1. 4

                                                              There was uncertainty about the lawfulness of using the BSD source due to AT&T’s UNIX patents and ongoing litigation.

                                                              1. 1

                                                                That was also a big part. But I think that forcing people to contribute their improvements to the community was also a contributing factor.

                                                                That being said I don’t think any license is one size fits all. And I am very glad that we have great kernels in both gpl and bsd licenses.

                                                          1. 1

                                                            Great talk. I have a couple questions that weren’t answered in the Q&A and I don’t see answered on the Zig website.

                                                            Does the Zig frontend compile-to-C under the hood? This is something that irked me about some other newer languages in this space, like Nim, due to the inadvertent undefined behavior it could create rather than going directly to IR (which, to be clear, could also create other UB, but at least it could be well defined for the language instead of being a bug introduced by the C conversion).

                                                            How is cross-compiling support for other OSes? In the shown slide, all of the targets were Linux on some arch and libc, but is it just as easy to cross-compile from one OS to another? Actually, this looks supported according to the website!

                                                            1. 12

                                                              Does the Zig frontend compile-to-C under the hood? This is something that irked me about some other newer languages in this space, like Nim, due to the inadvertent undefined behavior it could create rather than going directly to IR (which, to be clear, could also create other UB, but at least it could be well defined for the language instead of being a bug introduced by the C conversion).

                                                              This is a fallacy: “compiling to X” doesn’t mean you inherit its flaws; that’s the whole point of compiling. UB is a worry when you’re writing C manually, but when compiling to C it’s only a problem if there is a bug in the Nim compiler.

                                                              Asm has an “unknown opcode exception”, C doesn’t. And C compiles to asm.

                                                              1. 7

                                                                Does Nim put checks in place for things like signed overflow? If so, how does it affect performance?

                                                                1. 2

                                                                  Yeah, it does. Haven’t benchmarked it personally so not sure. This is of course all customisable, so if you’re feeling brave you can disable these checks.

                                                                  1. 2

                                                                    Thanks for the response, and doing overflow checks really does seem like the right thing. I remain curious about the effect on performance, as in my opinion this is a serious drawback of C as a compilation target.

                                                              2. 5

                                                                Does the Zig frontend compile-to-C under the hood?

                                                                LLVM-based (like Rust and Crystal), from the looks of it, though Zig’s written in itself now.

                                                                1. 22

                                                                  though Zig’s written in itself now.

                                                                  Clarification: the self-hosted compiler is not able to build anything beyond hello world yet. However the zig compiler that is shipped on ziglang.org/download is in fact a hybrid of C++ and Zig code. It’s actually pretty neat how it works:

                                                                  1. Build all the compiler source into libstage1.a, and userland-shim.cpp into userland-shim.o.
                                                                  2. Link libstage1.a and userland-shim.o into zig0.exe. This is the C++ compiler, but missing some features such as zig fmt, @cImport, and stack traces on assertion failures.
                                                                  3. Use zig0 to build stage2.zig into libuserland.a.
                                                                  4. Link libstage1.a and libuserland.a into zig.exe, which has features such as zig fmt, @cImport, and stack traces on assertion failures.

                                                                  Think about how cool this is: in step 4, the exact same library file is linked against a self-hosted library rather than a C++ shim file, and therefore the re-linked binary gains extra powers!

                                                                  1. 1

                                                                    I love PLs but I’ve rarely sat down and done the work to piece through how magical this stuff is. Thanks!

                                                                2. 4

                                                                  I’d say the Zig cross-compilation story is even better than Go’s. And that’s a really hard bar to meet.

                                                                1. 3

                                                                  The answer to

                                                                  I removed a bad release from my repository but it still appears in the mirror, what should I do?

                                                                  makes me wonder whether there should be a way to flag packages with serious defects as deprecated (perhaps in a bugfix release). Does anyone who’s following the go module story closely know whether that would/wouldn’t make sense / is planned / etc.?

                                                                  1. 3

                                                                    Don’t know if it has been implemented in the main tooling, but it was suggested in the original Vgo blog that a vX.Y.Z+deprecated tag could be added to indicate tags which should not be used.

                                                                    https://research.swtch.com/vgo-module#deprecated_versions

                                                                  1. 4

                                                                    I would recommend building honk as a Go module, as it solves nearly all complaints about GOPATH, special directories, and vendoring.

                                                                    One thing that modules don’t cleanly solve (because it is expected that unfetched dependencies will be fetched from the internet) is downloading the source and all dependencies as a single archive, but you can work around this by extracting all dependency module archives into a directory and pointing GOPROXY at it.
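One concrete shape of that workaround (a sketch, assuming Go 1.13+, which accepts file:// proxies; it relies on the module download cache using the same on-disk layout as the proxy protocol):

```shell
# Populate the local module cache from the network once.
go mod download

# Later builds can resolve every dependency from disk, with no
# network access, by treating the cache as a file:// proxy.
export GOPROXY="file://$(go env GOPATH)/pkg/mod/cache/download"
go build ./...
```

Shipping that cache directory alongside the source gives you a single self-contained archive.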

                                                                    1. 2

                                                                      I have been doing my best to procrastinate out of switching to modules. I like writing go, but I’m less enthusiastic about being a go build engineer. I also wasn’t entirely sure where modules were headed when I wrote the release script, but maybe I’ll come around by the time 1.13 comes out.

                                                                      1. 1

                                                                        Hey Ted, I just tested Go modules on a checkout of your repo. It’s as simple as go mod init main, then commit those two files to your repo and you’re done. Then you can go build or go test or whatever you want.

                                                                        1. 2

                                                                          go mod init main

                                                                          This is bad advice. Instead of main the module name should be humungus.tedunangst.com/r/honk (or whatever) so that it can be fetched, built, and installed automatically, from any directory, with a single go get command.
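Concretely (using the module path suggested above):

```shell
# Name the module by its real import path, not a placeholder:
go mod init humungus.tedunangst.com/r/honk

# Anyone can then fetch, build, and install it in one step:
go get humungus.tedunangst.com/r/honk
```

With a placeholder like main, the second command has no way to map the import path back to the repository.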

                                                                          1. 1

                                                                            Awesome. I actually didn’t know that was possible. I thought the argument was the package (not module) name. In my local projects, just using go mod init was enough, without specifying the module path.

                                                                            I did a new checkout and tried your way. Worked perfectly and makes much more sense.

                                                                            Thanks!

                                                                    1. 2

                                                                      Is the new flak open source? (and are the old iterations for that matter?)

                                                                      1. 3

                                                                        This is like 4.0, where it’s a major version change only so that the second number doesn’t get too high, right?

                                                                        1. 14

                                                                          But I’d like to point out (yet again) that we don’t do feature-based releases, and that “5.0” doesn’t mean anything more than that the 4.x numbers started getting big enough that I ran out of fingers and toes.

                                                                          Looks like it, yeah.

                                                                          1. 6

                                                            At this point I think it would be more appropriate to just increase the major version, kind of like Firefox did back in the day. The separation between minor and major doesn’t exist here and it’s kind of misleading.

                                                                            1. 5

                                                              That’s a good point. For some kinds of software (e.g. end-user software, or software with a stable API/ABI), regular versioning or semantic versioning doesn’t make sense. Another idea is to use calendar versioning.

                                                                              1. 4

                                                                                But then the number would get too high!

                                                                                1. 5

                                                                                  It would be devastating to run out!

                                                                                  1. 14

                                                                    Past 2,147,483,647, it finally starts being a true 64-bit operating system.

                                                                                  2. 1

                                                                                    It’s not like we’re gonna run out of integers.

                                                                                    1. 2

                                                                                      Every time I hear something like that, I think about the exhaustion of the IPv4 address space. No, we didn’t run out of numbers… just unsigned 32-bit integers.

                                                                                      I cautiously agree with your assessment in regard to version numbers; if we did have to migrate to something else, it would be an easy migration to make.

                                                                                      1. 2

                                                                                        Every time I hear something like that, I think about the exhaustion of the IPv4 address space. No, we didn’t run out of numbers… just unsigned 32-bit integers.

                                                                        Well, there’s also how they get whole blocks of addresses. When I studied networking, I figured we should just give out individual addresses or numbers as they use them. One per server or public-facing node. Instead, they did blocks, where one company with lots of money could have an easy organization scheme (especially hierarchical) using piles of IP addresses they might not even use, at great expense to the Internet in terms of fair distribution.

                                                                                        A single, 64-bit number with a good, routing scheme could’ve gone a long way.
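For scale, the gap between the two spaces is easy to compute (a quick sketch):

```go
package main

import "fmt"

func main() {
	// IPv4 is a 32-bit space; "exhaustion" meant running out of these.
	ipv4 := uint64(1) << 32
	fmt.Println(ipv4) // 4294967296

	// A flat 64-bit space contains 2^32 complete IPv4-sized spaces,
	// which is why a single 64-bit number could have gone a long way.
	fmt.Println(uint64(1) << (64 - 32)) // 4294967296
}
```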

                                                                                        1. 3

                                                                                          Had the ’net been based on single addresses there would not have been a guarantee of route locality between neighbouring addresses. In the days of yore when memory was expensive and Cisco-memory double-plus-ungood expensive it would have been more or less impossible to implement a scheme like that and still keep the routing infrastructure going. Routing tables were overflowing as it was, even when routing on base of AS instead of on single addresses.

                                                                                          1. 1

                                                                                            Had the ’net been based on single addresses there would not have been a guarantee of route locality between neighbouring addresses.

                                                                                            I think we could build that into the routing protocols. They were already using metadata in the form of the IP addresses themselves and what was in routing protocols. I can’t tell you exactly what it would look like but we did develop similar ideas for domains with DNS.

                                                                                            1. 2

                                                                                              Of course it could have been built in, that was never the problem. The problem was that the routing tables would have become too big to fit in the relatively small amount of memory available on core routers back then.

                                                                                          2. 2

                                                                                            Yeah. Oh well. We can try that on the next planet, I guess. That’s not what IPv6 does, and it’s hard to imagine going through another multi-decade migration without a very strong reason.

                                                                                          3. 1

                                                                                            I was mostly being a smart-ass. I have heard people object to UUIDs as identifiers for some large but not-that-large set of entities, and I always respond “we’re not going to run out of integers, people”.

                                                                                      2. 2

                                                                                        I don’t like Firefox’s numbering scheme. I can never remember what version of Firefox is meant to be the latest.

                                                                                        It would be much better to use dates. Linux 19.3 for the March release, Linux 19.5 for May, etc. It comes out every 2-3 months. Or perhaps just 19.1 for the first release of 2019, 19.2 for the second, etc.

                                                                                        1. 1

                                                                                          Y2.1K

                                                                                          1. 1

                                                                            Yes, I agree a single number probably is not that much better (although I think it’s better than what Linux does now, where minor and major mean exactly the same thing).

                                                                                            I think your CalVer idea is better.

                                                                                      3. 6

                                                                                        This is always the case with the linux kernel.

                                                                                      1. 1

                                                                                        Is there a lobsters room?

                                                                                        1. 1

                                                                                          I don’t see one when querying the matrix.org room directory (but that doesn’t mean one doesn’t exist).

                                                                                          1. 1

                                                                                            Not that I know of, but there’s one on IRC/Freenode you can connect to.

                                                                                            1. 2

                                                                                              Yeah, I use Riot to access that. Would be nice to have an official bridge to Matrix/Riot.

                                                                                              1. 1

                                                                                                Ok, I found the link: https://matrix.to/#/!XMCeiYjoZpFFyvICGb:matrix.org?via=matrix.org&via=gpmatrix.com&via=t2bot.io

                                                                                                Would be nice to have an official bridge to Matrix/Riot.

                                                                                Personally I don’t see much of a difference. The channel is listed under https://lobste.rs/chat, and as Matrix users we’re just connecting via a special bouncer, so to speak. At most, it should be officially listed somewhere.

                                                                                                1. 1

                                                                                                  You get a proper name, it’s easier to find and new users can see the channel history.

                                                                                          1. 13

                                                                                            The counter argument would be Moxie of course:

                                                                                            One of the controversial things we did with Signal early on was to build it as an unfederated service. Nothing about any of the protocols we’ve developed requires centralization; it’s entirely possible to build a federated Signal Protocol-based messenger, but I no longer believe that it is possible to build a competitive federated messenger at all.

                                                                                            So the big challenge will come when users expect some new feature which ActivityPub currently does not provide.

                                                                                            1. 15

                                                                                              Mastodon and the ActivityPub community have been iterating and pumping out new features on a rapid basis. On a protocol level, ActivityPub itself is an iteration on the Activity Streams and ActivityPump protocols, themselves an iteration on OStatus. And there are plenty of ActivityPub instances that weren’t initially envisioned: PeerTube, MediaGoblin, NextCloud, … and chess?

                                                                                              I suppose moxie would argue that Mastodon isn’t or won’t be competitive.

                                                                                              I argue Signal, just like Twitter, will run out of money.

                                                                                              1. 4

                                                                                                Signal will become what WhatsApp was meant to become. WhatsApp could have been a secure messaging layer for businesses and consumers but Facebook made them an offer they couldn’t refuse so that dream wasn’t realized.

                                                                                                Signal now has a foundation and they have one of the original founders of WhatsApp bankrolling the operation. I don’t think they will run out of money and might even realize the original WhatsApp dream.

                                                                                                1. 1

                                                                                                  Want to longbet?

                                                                                                  1. 1

                                                                                                    Sure.

                                                                                              2. 10

                                                                                                That quote is not really a good counter argument, it basically reads like “federation is bad because I said so.” You have to read the rest of his post to tease out his arguments:

                                                                                                • federation makes it difficult to make changes
                                                                                                • federation still favors a single service provider (e.g. Gmail and email)

                                                                                                (Note: I don’t agree with moxie, just posting his counter argument for others to read)

                                                                                                1. 8

                                                                                                  The counter argument would be Moxie of course

                                                                                                  I’d have a lot easier time taking his arguments seriously if he hadn’t threatened legal action against a free software project simply for trying to build an interoperable client.

                                                                                                  1. 4

                                                                                                    Mastodon seems to cope quite well with this, possibly because there are few implementations and upgrading the server application isn’t too hard.

                                                                                                    But I think the counter argument is entirely correct - it’s not possible (or at least very hard) to build a competitive federated messenger - and that’s completely fine. Competition is one of the parts of the centralised model that leads to de-prioritising users’ needs so that platforms can be monetised to keep themselves alive and “competitive”.

                                                                                                    1. 5

                                                                                                      Wait, what about matrix though?

                                                                                                      1. 2

                                                                                                        To clarify my opinion a bit - I’m suggesting that federated networks won’t succeed by the metrics used to measure if something is “competitive”, not that federated networks don’t work. I think Mastodon and Matrix are both really good projects that will be much better than the alternatives long term, since there won’t be many incentives not to prioritise the needs of their users.

                                                                                                        1. 2

                                                                                                          Matrix from what I heard has scaling issues; we’re talking “three people on a single server massively increases load” bad. I think it’s due to protocol flaws?

                                                                                                          1. 5

                                                                                                            Many of Matrix’s scaling issues come from federation (trying to sync room state across many different homeserver instances) and the poor state resolution algorithm they were using up until this past summer. Three (or thousands) of users on a single server participating in a room is not a concern, as that is a centralized instance.

                                                                                                            Highly recommend following the matrix blog and TWIM for project updates, especially for anything about synapse (their reference homeserver implementation). It was recently updated to python 3 and the memory footprint has drastically reduced. Keep a lookout for the “next generation” homeserver implementation, Dendrite, sometime after the Matrix 1.0 spec releases.

                                                                                                            1. 2

                                                                                                              I remember reading that this was because the current reference server implementation is simply not optimized. They’re rewriting it in Go (IIRC the new server is called Dendrite), but we’ll have to wait and see how performance changes.

                                                                                                      1. 5

                                                                                                        This is quite timely. I just set up OpenBSD on a new (to me) 15” PowerBook G4 and have been using it over the past week. Quite happy with how well it runs, and it’s astonishing that a 14 year old laptop can still be so good.

                                                                                                        1. 3

                                                                                                          So I’m not a Go guy, but I’ve been following it from the sidelines because I’m a huge fan of Rob Pike and his work and the language itself appeals to me.

                                                                                                          One of the reasons why I like C is that the differences between C11 and C89 are fairly minimal. I regularly work with C code that’s from before 1989 and it still generally compiles with only a few minor changes.

                                                                                                          So watching from the sidelines with Go I’m worried that Go 2 is going to be just an enormous change, to the point that there’s no reason to learn Go 1.

                                                                                                          Anyone with Go experience and insight into Go 2 have anything to assuage my fears?

                                                                                                          1. 6

                                                                                                            They said something like “we can afford somewhere between two and five breaking changes in go 2, and we’ll have tooling to assist the upgrade.”

                                                                                                            1. 3

                                                                                                              From the official blog:

                                                                                                              Go 2 must bring along all those developers. We must ask them to unlearn old habits and learn new ones only when the reward is great.

                                                                                                              Maybe we can do two or three, certainly not more than five.

                                                                                                              I’m focusing today on possible major changes, such as additional support for error handling, or introducing immutable or read-only values, or adding some form of generics, or other important topics not yet suggested.

                                                                                                            2. 3

                                                                                                              I write a lot of Go at my current job.

                                                                                                              While I have my qualms with the language (generics pls) I wouldn’t be worried at all about Go 1 knowledge being irrelevant for Go 2. The language probably won’t change too much. It’s very clear that they aren’t starting over and instead taking a practical look at what Go 1 doesn’t handle well and finding solutions for that. If anything, those solutions may make you appreciate Go 2 even more.

                                                                                                            1. 16

                                                                                                              Whereas most OS’ include proprietary, closed source drivers, OpenBSD does not, by default. Closed source drivers can’t be audited, thus forming an unknown attack vector. It might be bug-ridden, vulnerable, unfree licensed, etcetera. Of course, for your convenience, if you would like to go down the rabbit hole, there is fw_update.

                                                                                                              That sounds a bit confused.

                                                                                                              Many devices are just dead bricks of silicon without firmware (a small embedded OS) that runs on the device. So unless you run the firmware, you have bought a brick.

                                                                                                              fw_update(1) installs the hardware vendor’s non-free firmware (running on the device) to make the device operate so that drivers (running in the kernel, and always free in OpenBSD’s case) can use the device.

                                                                                                              1. 11

                                                                                                                And to add on to this, fw_update is only needed in cases where OpenBSD is unable to include the firmware in the base install because redistribution is prohibited. Other (including closed source) firmware can already be found in a clean install in /etc/firmware.

                                                                                                                1. 1

                                                                                                                  What does redistribution mean in this case? What makes downloading it in an arbitrary tarball from ftp.openbsd.org not okay, but downloading it in an arbitrary tarball from firmware.openbsd.org okay?

                                                                                                                  1. 2

                                                                                                                    In some cases redistribution is ok. The line is really more about stuff on the ftp server is free (to modify, etc.) and the firmware stuff is not. There’s also only one firmware server. It’s not mirrored. So for some of the files that are in a bit of a grey area, mirrors aren’t exposed to any risk.

                                                                                                                    1. 1

                                                                                                                      There are firmware mirrors (round robin dns) but indeed they’re separate from the ftp mirrors.

                                                                                                                2. 4

                                                                                                                  I think the distinction is between drivers and firmware? OpenBSD does not ship driver blobs (which run on the main CPU), but does allow you to update firmwares (which run on the device).

                                                                                                                  1. 2

                                                                                                                    fw_update does not update drivers. The author’s comments implied they believed it does.

                                                                                                                1. 5

                                                                                                                  As an update, the drivers were updated to remove the shortcut: https://twitter.com/CatalystMaker/status/857766176910446596

                                                                                                                  1. 10

                                                                                                                    Without reading this article (too many words, as noted by others), I have to make a fly-by comment just based on the title and opening sentences. I highly recommend anyone looking for a very well written story that can only be told as a video game to try out NieR:Automata (for PS4 and Steam). The article seems to be written entirely around the assumption that these can’t or don’t exist, which is patently false.

                                                                                                                    If you play NieR, please heed the message after you “beat” the game and keep playing using the same save file. You’ve only scratched the surface of the content at that point.

                                                                                                                    1. 3

                                                                                                                      NieR: Automata is one of the best games to be released in a long time.

                                                                                                                      Much like the Metal Gear Solid series, NieR: Automata can succeed only as a game because it effectively uses the medium to convey something much more than a narrative despite occasionally ham-fisted writing. In other words, the strength of the entire presentation overcomes the weaknesses in writing. To truly appreciate these games, however, you need at least a cursory ‘education’ in video games. A lot of what makes them brilliant is their willingness to tamper with players’ expectations. You just don’t see that happen much.

                                                                                                                      1. 1

                                                                                                                        Can confirm!