1. 3

    I use zsh

    PS1='%n@%m %2d%% '   # user@host, last two directory components, literal %

    RPROMPT="%* %W"      # right prompt: clock (HH:MM:SS) and date (mm/dd/yy)
    

    I don’t update my environment very much; I’ve taken my basic UI (Window Maker, zsh, xterm, Emacs, a few other things) with me across multiple Linux distros. It’s old enough that I’m pretty sure it predates emoji support in widely-available terminal fonts, for example.

    1. 1
      PS1="`hostname | awk -F . '{print $1}'`%# ";   # short hostname (what %m gives), then %/# marker
      RPS1='%d';                                     # right prompt: current directory
      

      I think I last modified mine in 2003 or something? I remember choosing zsh at the time for the simple reason that it supported right-aligned prompts. I’m not sure if the hostname-into-awk nonsense predates the %m escape code, or if I simply was unaware of it at the time.

    1. 2

      It’s articles like this that make me question the viability of remote work. These big, savvy companies, who could theoretically grow their workforce almost anywhere in the world, seem to be willing to pay a premium for engineers who are located not just in the US, but a specific area of the US.

      If remote software developers were truly as effective as co-located developers, you’d expect salaries to even out. Instead, there are differences of an order of magnitude, seemingly mostly based on location.

      1. 2

        It’s hard to be an effective remote engineer. Some people can do it but it’s not for everybody.

        1. 1

          There is a group of people (even more vocal in my country, I think) who argue that remote work is the future, and it’s true that remote work, aided by the availability of broadband, has been growing. And it’s a very attractive idea.

          On the opposite side of that argument, the component of people’s compensation that is location-based seems to be increasing, not decreasing. This is assuming that fresh grads in SV are at least somewhat comparable to fresh grads elsewhere.

          1. 1

            Remote work is growing precisely because it’s so much cheaper to hire engineers outside the Bay Area, but the VCs are still there.

      1. 11

        Conspicuously absent is xfig, an easy-to-use vector image editor. I used it for a bunch of projects before Inkscape rolled into town. It looks to still be maintained today, unlike most of the programs in this list.

        1. 2

          xfig also has one of the few implementations of x-splines (x means “cross” here, like “pedestrian xing”, unrelated to the X window system). I find x-splines very nice and intuitive.

          Here’s a little x-spline implementation I made:

          https://jordi.platinum.edu.pl/xsplines/splines.html
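
          For anyone curious what these look like in code: x-splines attach a shape parameter to each control point, approximating (B-spline-like) for positive values and interpolating for negative ones. The full x-spline basis functions are too long to quote from memory, so below is only a minimal Rust sketch of plain Catmull-Rom, the classic interpolating spline whose behavior negative-parameter x-splines resemble. The code is mine, purely illustrative, not taken from the page above.

          // Evaluate one Catmull-Rom segment at t in [0, 1].
          // The curve passes through p1 at t = 0 and p2 at t = 1;
          // x-splines generalize this with a per-point shape parameter.
          fn catmull_rom(p0: f64, p1: f64, p2: f64, p3: f64, t: f64) -> f64 {
              0.5 * (2.0 * p1
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)
          }

          fn main() {
              for i in 0..=4 {
                  let t = i as f64 / 4.0;
                  println!("t = {:.2} -> {:.3}", t, catmull_rom(0.0, 1.0, 3.0, 4.0, t));
              }
          }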

          1. 1

            It was easy to learn because at each step, it showed an explanation of what would happen if you clicked the left, right, or middle button. It was a very simple affordance that few applications since have copied.

            I used it long after ‘better’ tools became available. It made diagrams ridiculously easy.

          1. 1

            There seems to be a belief amongst memory safety advocates that it is not just one of many ways in which software can fail, but the most critical one in existence today, and that, if programmers can’t be convinced to switch languages, maybe management can be made to force them.

            I didn’t see this kind of zeal when (for example) PHP software fell prey to SQL injections left and right, but I’m trying to understand it. The quoted statistics about found vulnerabilities seem unconvincing, and are just as likely to indicate that static analysis tools have made these kinds of programming errors easy to find in existing codebases.

            1. 19

              Not all vulnerabilities are equal. I prioritize those that give attackers full control over my computer. They’re the worst. They can lead to every other problem. Plus, an attacker’s rootkit or the damage done might mean you never get the machine back. You can lose the physical property, too. Alex’s field evidence shows memory unsafety causes around 70-80% of this. So, worrying about hackers hitting native code, it’s rational to spend 70-80% of one’s effort on eliminating memory unsafety.

              More damning is that languages such as Go and D make it easy to write high-performance, maintainable code that’s also memory safe. Go is easier to learn, with a huge ecosystem behind it, too. Ancient Java being 10-15x slower than C++ made for a good reason not to use it. Now, most apps are bloated/slow, the market uses them anyway, some safe languages are really lean/fast, using them brings those advantages, and so there’s little reason left for memory-unsafe languages. Even in their intended use cases, one can often use a mix of memory-safe and -unsafe languages, with the unsafe one used on the performance-sensitive or lowest-level parts of the system. Moreover, safer languages such as Ada and Rust give you guarantees by default on much of that code, allowing you to selectively turn them off only where necessary.

              For those using unsafe languages and having money, there are also tools that automatically eliminate most of the memory-unsafety bugs. That companies pulling in 8-9 digits still have piles of them shows total negligence. Same with those in open-source development who aren’t doing much better. So, on that side of things, whatever tool you encourage should lead to memory safety even with apathetic, incompetent, or rushed developers working on code with complex interactions. Doubly true if it’s multi-threaded and/or distributed. A safe, orderly-by-default setup will prevent loads of inevitable problems.
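
              To illustrate that last point, a minimal Rust sketch (my own, purely illustrative): the guarantees are on by default, and the opt-out is local and easy to audit.

              fn main() {
                  let v = vec![1u8, 2, 3];

                  // Safe by default: this indexing is bounds-checked.
                  let a = v[1];

                  // Opt out only where profiling justifies it; the `unsafe`
                  // block marks exactly the code that must be audited by hand.
                  let b = unsafe { *v.get_unchecked(2) };

                  println!("{} {}", a, b);
              }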

              1. 13

                The quoted statistics about found vulnerabilities seem unconvincing

                If studies by security teams at Microsoft and Google, and analysis of Apple’s software, are not enough for you, then I don’t know what else could convince you.

                These companies have huge incentives to prevent exploitable vulnerabilities in their software. They get the best developers they can, they are pouring many millions of dollars into preventing these kinds of bugs, and still regularly ship software with vulnerabilities caused by memory unsafety.

                “Why bother with one class of bugs, if another class of bugs exists too” position is not conducive to writing secure software.

                1. 3

                  “Why bother with one class of bugs, if another class of bugs exists too” position is not conducive to writing secure software.

                  No - but neither is pretending that you can eliminate a whole class of bugs for free. Memory safe languages are free of bugs caused by memory unsafety - but at what cost?

                  What other classes of bugs do they make more likely? What is the development cost? Or the runtime performance cost?

                  I don’t claim to have the answers but a study that did is the sort of thing that would convince me. Do you know of any published research like this?

                  1. 9

                    No - but neither is pretending that you can eliminate a whole class of bugs for free. Memory safe languages are free of bugs caused by memory unsafety - but at what cost?

                    What other classes of bugs do they make more likely? What is the development cost? Or the runtime performance cost?

                    The principal cost of memory safety in Rust, IMO, is that the set of valid programs is more heavily constrained. You often hear this manifest as “fighting with the borrow checker.” This is definitely an impediment. I think a large portion of folks get past this stage, in the sense that “fighting the borrow checker” is, for the most part, a temporary hurdle. But there are undoubtedly certain classes of programs that Rust will make harder to write, even for Rust experts.
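
                    To make that concrete, here is a small sketch (my own, with made-up names) of the kind of program the borrow checker rejects at compile time, even though this particular run would be harmless:

                    struct Grid {
                        cells: Vec<u8>,
                        cursor: usize,
                    }

                    impl Grid {
                        fn current(&self) -> &u8 {
                            &self.cells[self.cursor]
                        }
                        fn advance(&mut self) {
                            self.cursor += 1;
                        }
                    }

                    fn main() {
                        let mut g = Grid { cells: vec![1, 2, 3], cursor: 0 };
                        let c = g.current(); // immutable borrow of all of `g`...
                        g.advance();         // ...so this mutation is rejected (error E0502)
                        println!("{}", c);
                        // One accepted rewrite: copy the value out with `let c = *g.current();`
                    }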

                    Like all trade-offs, the hope is that the juice is worth the squeeze. That’s why there has been a lot of effort put into making Rust easier to use, and a lot of effort put into emitting good error messages.

                    I don’t claim to have the answers but a study that did is the sort of thing that would convince me. Do you know of any published research like this?

                    I’ve seen people ask this before, and my response is always, “what hypothetical study would actually convince you?” If you think about it, it is startlingly difficult to do such a study. There are many variables to control for, and I don’t see how to control for all of them.

                    IMO, the most effective way to show this is probably to reason about vulnerabilities due to memory safety in aggregate. But to do that, you need a large corpus of software written in Rust that is also widely used. But even this methodology is not without its flaws.

                    1. 2

                      If you think about it, it is startlingly difficult to do such a study. There are many variables to control for, and I don’t see how to control for all of them.

                      That’s true - but my comment was in response to one claiming that the bug surveys published by Microsoft et al should be convincing.

                      I could imagine something similar being done with large Rust code bases in a few years, perhaps.

                      I don’t have enough Rust experience to have a good intuition on this so the following is just an example. I have lots of C++ experience with large code bases that have been maintained over many years by large teams. I believe that C++ makes it harder to write correct software: not (just) because of memory safety issues, undefined behavior etc. but also because the language is so large, complex and surprising. It is possible to write good C++ but it is hard to maintain it over time. For that reason, I have usually promoted C rather than C++ where there has been a choice.

                      That was a bit long-winded but the point I was trying to make is that languages can encourage or discourage different classes of bugs. C and C++ have the same memory safety and undefined behavior issues but one is more likely than the other to engender other bugs.

                      It is possible that Rust is like C++, i.e. that its complexity encourages other bugs even as its borrow checker prevents memory safety bugs. (I am not now saying that is true, just raising the possibility.)

                      This sort of consideration does not seem to come up very often when people claim that Rust is obviously better than C for operating systems, for example. I would love to read an article that takes this sort of thing into account - written by someone with more relevant experience than me!

                      1. 7

                        I’ve been writing Rust for over 4 years (after more than a decade of C), and in my experience:

                        • For me Rust has completely eliminated memory unsafety bugs. I don’t even use debuggers or Valgrind any more, unless I’m integrating Rust with C.
                        • I used to have, at least during development, all kinds of bugs that spray the heap, corrupt some data somewhere, use uninitialized memory, use-after-free. Now I get compile-time errors or panics (which are safe, technically like C++ exceptions).
                        • I get fewer bugs overall. Lack of NULL and mandatory error handling are amazing for reliability.
                        • Built-in unit test framework, richer standard library and easy access to 3rd party dependencies help too (e.g. instead of hand-rolling yet another buggy hash table of my own, I use a well-tested, well-optimized one).
                        • My Rust programs are much faster. Single-threaded Rust is 95% as fast as single-threaded C, but I can easily parallelize way more than I’d ever dare in C.

                        The costs:

                        • Rust’s compile times are not nice.
                        • It took me a while to become productive in Rust. “Getting” ownership requires unlearning C and a lot of practice. However, I’m not fighting the borrow checker any more, and I’m more productive in Rust thanks to higher-level abstractions (e.g. I can write a map/reduce iterator that collects something into a btree — in 1 line; see the sketch below).
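
                        A minimal sketch of that last one-liner (my example, not the parent’s actual code); the collecting expression is the single line in question:

                        use std::collections::BTreeMap;

                        fn main() {
                            // Map each word to its length, collecting into an ordered tree map.
                            let lengths: BTreeMap<&str, usize> =
                                ["spline", "prompt", "shell"].iter().map(|w| (*w, w.len())).collect();
                            println!("{:?}", lengths);
                        }
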
                  2. 0

                    Of course older software, mostly written in memory-unsafe languages, sometimes written in a time when not every device was connected to a network, contains more known memory vulnerabilities. Especially when it’s maintained and audited by companies with excellent security teams.

                    These statistics don’t say much at all about the overall state of our software landscape. It doesn’t say anything about the relative quality of memory-unsafe codebases versus memory-safe codebases. It also doesn’t say anything about the relative sizes of memory-safe and memory-unsafe codebases on the internet.

                    1. 10

                      iOS and Android aren’t “older software”. They were designed to be networked, and supposedly secure, from the start.

                      Memory-safe codebases have 0% memory-unsafety vulnerabilities, so that is easily comparable. For example, check out the CVE database. Even within one project — Android — you can easily see whether the C or the Java layers are responsible for the vulnerabilities (spoiler: it’s C, by far). There’s a ton of data on all of this.

                      1. 2

                        Android is largely cobbled together from older software, as is iOS. I think Android still needs a Fortran compiler to build some dependencies.

                        1. 9

                          That starts to look like a No True Scotsman. When real-world C codebases have vulnerabilities, they’re somehow not proper C codebases. Even when they’re part of flagship products of top software companies.

                          1. 2

                            I’m actually not arguing that good programmers are able to write memory-safe code in unsafe languages. I’m arguing that vulnerabilities happen at all levels in programming, and that, while memory safety bugs are terrible, there are common classes of bugs in more widely used (and, more importantly, more widely deployed) languages, which make memory unsafety just one class of bugs out of many.

                            When XSS attacks became common, we didn’t implore VPs to abandon JavaScript.

                            We’d have reached some sort of conclusion earlier if you’d argued with the point I was making rather than with the point you wanted me to make.

                            1. 4

                              When XSS attacks became common, we didn’t implore VPs to abandon JavaScript.

                              Actually, we did. Sites/companies that solved XSS did so by banning generation of markup “by hand”, and instead mandated use of safe-by-default template engines (e.g. JSX). Same with SQL injection: years of saying “be careful, remember to escape” didn’t work, and “always use prepared statements” worked.
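
                              For instance, with a prepared statement the user-supplied value never becomes part of the SQL text at all. A sketch in Rust, assuming the rusqlite crate (the same shape applies to PHP’s PDO, JDBC, and friends):

                              use rusqlite::{params, Connection, Result};

                              fn add_user(conn: &Connection, name: &str) -> Result<usize> {
                                  // ?1 is a placeholder: the driver passes `name` as data,
                                  // so it is never parsed as SQL, whatever it contains.
                                  conn.execute("INSERT INTO users (name) VALUES (?1)", params![name])
                              }

                              fn main() -> Result<()> {
                                  let conn = Connection::open_in_memory()?;
                                  conn.execute("CREATE TABLE users (name TEXT)", [])?;
                                  add_user(&conn, "Robert'); DROP TABLE users;--")?;
                                  Ok(())
                              }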

                              These classes of bugs are prevalent only where developers think they’re not a problem (e.g. they’ve always been writing pure PHP, and will continue to write pure PHP forever, because there’s nothing wrong with it, apart from the XSS and SQLi, which are a force of nature and can’t be avoided).

                              1. 1

                                This kind of makes me think of someone hearing others talk about trying to lower the murder rate and then hysterically going into a rant about how murder is only one class of crime.

                                1. -1

                                  I think a better analogy is campaigning aggressively to ban automatic rifles when the vast majority of murders are committed using handguns.

                                  Yes, automatic rifles are terrible. But pointing them out as the main culprit behind the high murder rate is also incorrect.

                                  1. 4

                                    That analogy is really terrible and absolutely not fitting the context here. It’s also very skewed; the murder rate is not the reason for calls for bans.

                              2. 2

                                Although I mostly agree, I’ll note Android was originally built by a small business acquired by Google, which continued to work on it, probably with extra resources from Google. That makes me picture a move-fast-and-break-things kind of operation that was probably throwing pre-existing stuff together with their own as quickly as possible to get the job done (aka working phones, market share).

                            2. 0

                              Yes, if you zoom in on code bases written in memory-unsafe languages, you unsurprisingly get a large number of memory-unsafety vulnerabilities.

                              1. 12

                                And that’s exactly what illustrates “eliminates a class of bugs”. We’re not saying that we’ll end up in utopia. We just don’t need that class of bugs anymore.

                                1. 1

                                  Correct, but the author is arguing that this is an exceptionally grievous class of security bugs, and (in another article) that developers’ judgement should not be trusted on this matter.

                                  Today, the vast majority of new code is written for a platform where execution of untrusted memory-safe code is a core feature, and the safety of that platform relies on a stack of sandboxes written mostly in C++ (browser) and Objective-C/C++/C (system libraries and kernel).

                                  Replacing that stack completely is going to be a multi-decade effort, and the biggest players in the industry are just starting to dip their toes in memory-safe languages.

                                  What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                  1. 11

                                    Replacing that stack completely is going to be a multi-decade effort, and the biggest players in the industry are just starting to dip their toes in memory-safe languages.

                                    Hm, so. Apple has developed Swift, which is generally considered a systems programming language, to replace Objective-C, which was their main programming language and already had safety features like baked-in ARC. Google has implemented Go. Mozilla, Rust. Google uses tons of Rust in Fuchsia and has recently imported the Rust compiler into the Android source tree.

                                    Microsoft has recently been blogging about Rust quite a lot, often noting how much memory problems hurt its security story. Before that, Microsoft put tons of engineering effort into Haskell as a research base and C#/.NET as a replacement for their C/C++ APIs.

                                    Amazon has implemented Firecracker in Rust and bragged about it at their AWS keynote.

                                    Come again about “dipping toes”? Yes, there’s huge amounts of stack around, but there’s also huge amounts to be written!

                                    What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                    Because it’s always been a crisis and now we have the tech to fix it.

                                    P.S.: In case this felt a bit like bragging about Rust over the others: it’s just where I’m most aware of things happening. Go and Swift are doing fine, I just don’t follow them as much.

                                    1. 2

                                      The same argument was made for Java, which, on top of its memory safety, was presented as a pry bar against the nearly complete market dominance of the Wintel platform at the time. Java evangelism managed to convert new programmers - and universities - to Java, but not the entire world.

                                      Oracle’s deadly embrace of Java didn’t move it to rewrite its main cash cow in Java.

                                      Rust evangelists should ask themselves why.

                                      I think that of all the memory-safe languages, Microsoft’s C++/CLI effort comes closest to understanding what needs to be done to entice coders to move their software into a memory-safe environment.

                                      At my day job, I actually spend my discretionary time trying to move our existing codebase to a memory-safe language. It’s mostly about moving the pieces into place so that green-field software can seamlessly communicate with our existing infrastructure. Then seeing what parts of our networking code can be replaced, slowly reinforcing the outer layers while the inner core remains memory-unsafe.

                                      Delicate stuff, not something you want the VP of Engineering to issue edicts about. In the meantime, I’m still a C++ programmer, and I really don’t appreciate this kind of article painting a big target on my back.

                                      1. 4

                                        Java and Rust are vastly different ballparks for what you describe. And yet, Java is used successfully in the database world, so it is definitely to be considered. The whole search-engine and database world is full of Java stacks.

                                        Oracle didn’t rewrite its cash cow, because - yes, they are risk-averse and that’s reasonable. That’s no statement on the tech they write it in. But they did write tons of Java stacks around Oracle DB.

                                        It’s an argument on the level of “Why isn’t everything at Google Go now?” or “Why isn’t Apple using Swift for everything?”.

                                        1. 2

                                          Looking at https://news.ycombinator.com/item?id=18442941 it seems that it was too late for a rewrite when Java matured.

                                      2. 8

                                        What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                        To start the multi-decade effort now, and not spend more decades just saying that buffer overflows are fine, or that—despite 40 years of evidence to the contrary—programmers can just avoid causing them.

                            3. 9

                              I didn’t see this kind of zeal when (for example) PHP software fell prey to SQL injections left and right

                              You didn’t? SQL injections are still #1 in the OWASP top 10. PHP had to retrain an entire generation of engineers to use mysql_real_escape_string over vulnerable alternatives. I could go on…

                              I think we have internalized the SQL injection arguments, but have still not accepted the memory safety ones.

                              1. 3

                                I remember arguments being presented to other programmers. This article (and another one I remembered, which, as it turns out, is written by the same author: https://www.vice.com/en_us/article/a3mgxb/the-internet-has-a-huge-cc-problem-and-developers-dont-want-to-deal-with-it) explicitly targets the layperson.

                                The articles use the language of whistleblowers. They suggest that counter-arguments are made in bad faith, that developers are trying to hide this ‘dirty secret’. Consider that C/C++ programmers skew older, have less rosy employment prospects, and that this article feeds nicely into the ageist prejudices already present in our industry.

                                Arguments aimed at programmers, like this one, at least acknowledge the counter-arguments, and frame the discussion as one of industry maturity, which I think is correct.

                                1. 2

                                  I do not see it as bad faith. There are a non-zero number of people who say they can write memory safe C++ despite there being a massive amount of evidence that even the best programmers get tripped up by UB and threads.

                                  1. 1

                                    Consider that C/C++ programmers skew older, have less rosy employment prospects, and that this article feeds nicely into the ageist prejudices already present in our industry.

                                    There’s an argument to be made that the resurging interest in systems programming languages through Rust, Swift and Go future-proofs experience in those areas.

                                2. 5

                                  Memory safety advocate here. Memory unsafety is the most pressing issue because it invokes undefined behavior. At that point, your program is entirely meaningless and might do anything. Security issues can still be introduced without memory unsafety, of course, but you can at least reason about them, determine the scope of impact, etc.
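
                                  A tiny sketch of the difference (my own example): the same out-of-bounds read that is undefined behavior in C has exactly one defined outcome in safe Rust, so its impact can be reasoned about.

                                  fn main() {
                                      let v = vec![1, 2, 3];
                                      let i = 10;
                                      // In C, reading past the end of an array makes the whole
                                      // program meaningless. In safe Rust this line has one
                                      // defined outcome: a bounds-check panic you can observe,
                                      // test for, and contain.
                                      println!("{}", v[i]);
                                  }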

                                1. 10

                                  Just for comparison - here (CZ in EU) a company is required to pay you for the period when such an agreement is in effect and prevents you from performing a competing job.

                                  1. 6

                                    He mentions that in the post. To my understanding, that’s even EU-wide (edit: see below, it is not). (it definitely is in Germany)

                                    The reasoning in Germany is definitely that employee mobility is seen as a good thing and therefore strongly protected.

                                    1. 1

                                      It’s not EU-wide. In the Netherlands non-compete clauses are valid (and quite generic and common, even, particularly in tech). A judge can overturn it if it prevents you from finding work at all, but there’s no guarantee you’ll get paid.

                                      One of the worst rumours I’ve heard was about a company where people left not because they had better offers elsewhere, but because work conditions were shitty. Management threatened to invoke the non-compete to keep them from abandoning ship altogether. I wonder how much of that was true, and how it went.

                                      1. 2

                                        Right, thanks!

                                        Now that you mention it, I know about at least one Netherlands company trying to enforce a non-compete on an employee of their German subsidiary, and being confused when he insisted on the conditions mandated by law.

                                  1. 5

                                    There was exactly one moment when I was willing to consider the performance of that code, and it was the moment my shell command stuttered.

                                    If you can dismiss the issue at your whim, maybe the performance of the code wasn’t that relevant in the first place? I have trouble understanding this. If my browser decides to do some garbage collection right when I’m compiling, that’s likely to have vastly more impact on compilation speed than one unit test that’s slightly slower. I can make a change that adds 50ms to a 100ms function. Or I can make a change that turns a 0.1ms function into a 1ms function. Undetectable using this method, but a much worse regression.

                                    Of course, Go has a pretty good built-in system (go test -bench) for collecting objective, reproducible data about performance regressions. But you have to decide upfront that code needs to be benchmarked, and how it needs to be done.
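
                                    Rust, for comparison, has no stable built-in benchmark harness; the commonly used one is the third-party criterion crate. A sketch assuming criterion as a dev-dependency, with made-up names:

                                    // benches/parse.rs, run with `cargo bench`
                                    use criterion::{criterion_group, criterion_main, Criterion};
                                    use std::hint::black_box;

                                    // Stand-in for whatever code is under test.
                                    fn parse(input: &str) -> usize {
                                        input.split_whitespace().count()
                                    }

                                    fn bench_parse(c: &mut Criterion) {
                                        // black_box keeps the optimizer from deleting the work.
                                        c.bench_function("parse", |b| {
                                            b.iter(|| parse(black_box("some representative input")))
                                        });
                                    }

                                    criterion_group!(benches, bench_parse);
                                    criterion_main!(benches);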

                                    I know that for most software, desired performance isn’t specified, so code has a natural tendency to get slower over time. But there has to be a better way to stop this than relying on compiler interactivity.