1. 2

    GNU tar also has an incremental format, which should be supported long-term. It works on a per-file basis, though, rather than diffing files, so it wouldn’t be suitable if you regularly have large binary files that undergo small changes.
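
    Roughly, the workflow looks like this; the snapshot file passed to --listed-incremental is what records per-file state between runs (file and directory names here are just placeholders):

    # level 0: full backup; tar records each file's metadata in the snapshot file
    tar --create --listed-incremental=data.snar --file=backup-0.tar /data
    # level 1: archives only files added or changed since the snapshot was last updated
    tar --create --listed-incremental=data.snar --file=backup-1.tar /data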

    1. 2

      Yeah I looked at that. Avoiding it for two reasons:

      • I don’t want to keep around the “listed-incremental” file
      • It’s only supported by GNU tar
    1. 15

      In addition to all the resume reviews, I’ll add that your location strategy might be working against you as well, namely living in the SF Bay Area but applying to Seattle.

      In Feb 2017, I was living in Portland, decided to move back to SF, and started my job search only for SF. I was interviewing for 3 months before finishing the move in May 2017. I had a few onsites that involved flying, but the response rate always felt low at the initial-application stage.

      After moving back, it felt like the floodgates opened back up. I had as many onsites in 1 month as I’d had in those 3 months.

      The message here is:

      • Recruiters might be filtering out people who aren’t local (reasoning might include the interview process taking longer, no desire to pay relocation, or an assumption that you applied by mistake), so conditions might improve once you move
      • Applying to Seattle instead of SF Bay Area might be working against you
      1. 5

        Some companies assume relocation assistance would need to be factored in for non-local candidates (if the position isn’t remote-friendly), and thus that acquisition costs would be higher.

        My guess is that you are right and this would indeed be a strong filtering criterion.

        1. 1

          I’m surprised that would figure strongly, given what tech salaries are, unless people are being offered much larger relocation packages than I have been. For a non-international move I’ve been offered $5k usually, maybe negotiable up to $10k, which is basically in the salary-negotiation noise, a single-digit percentage of the first year’s pay.

          1. 2

            10k is single-digit %? I’d love to talk to your agent!

            But yeah, otherwise this accords with my experience.

            1. 1

              “9” is a single digit.

              1. 1

                That it is. I may ought to have had more coffee before posting.

        2. 2

          I don’t know how much of the following applies in the US…

          If the problem is that they won’t consider you because you’re not in the area, then if you have friends or family in the area, you might be able to use their address e.g. on your CV to get through the initial “not in the area” filter.

          The company I work for is always a little suspicious of applicants who aren’t local, mainly because:

          • There’s a good chance they’re either just doing a countrywide search, or a recruiter is doing one on their behalf. If they don’t really have any intention of moving, then it’s a waste of time interviewing them.
          • Especially for young graduates, there’s a fairly high chance that they’ll spend a year or two here, then decide that the area isn’t for them and move back home / to the cities. (There’s nothing wrong with doing that, but if a company thinks you’re going to leave, they’re less likely to invest time and resources in you than in an identical candidate who’s likely to stay for longer.)

          The way to get past these blocks is to tell a convincing story about why you want to move to the area. If you have family in the area, that will look promising. If you’ve lived in the area before, that’s also worth mentioning.

        1. 2

          Packing up my apartment to move to the USA next week (from the UK). My furniture isn’t really fancy enough to be worth shipping transatlantically so this involves a bunch of selling stuff on Gumtree and Facebook Marketplace. Mildly stressful, but not too bad so far.

          1. 2

            Out of the frying pan into the fire :) Good luck! Where in the US will you land?

            1. 3

              Washington, DC, for a job as assistant professor in CS at American University.

              And thanks! Will definitely be a change of scenery, pros and cons but hopefully good overall. Right now I live in a 20,000-person seaside town that’s 5 hours from London, while in a few weeks I’ll be living right in the Imperial Capital!

              1. 2

                Congratulations! DC is a fun town! If you drink beer, the bar scene is rather lively (I’m a beer fan and so am rather fond of the U Street Saloon). Lots of great culture happening there.

          1. 4

            At what stage in the interview process do you have this in mind? If it’s late-ish, seems plausible. If it’s early-ish, seems like a lot to ask up front from a candidate to spend a day grokking a codebase when they’re still at the point where they might be summarily rejected after 10 minutes’ review. You do mention that some people might not have the time, but even for those who do, is it a good use of their time?

            The academic-job version of this is a university wanting your initial application for a faculty position to come with a custom-made syllabus for one of their new courses, or a review of their degree program with suggested revisions, or something of that kind. (Distinct from asking you to send in an example syllabus of a course you might teach or have taught in the past, which isn’t custom, employer-specific work.) I usually pass on applying to those. I am happy to prep custom material once I’ve made it to the shortlist and the potential employer shows they’re serious enough about me as a candidate to fly me out for an on-site interview, though. Then it seems fair, and the time is less likely to be wasted.

            1. 4

              I figured this would replace the in person technical interviews. So for a company, the recruiting flow might be

              1. Initial Phone Interview
              2. Maybe a fast phone technical filter interview
              3. Give them the project, tell them to make a change (and that they’ll be reviewing a couple of PRs for the onsite)
              4. On site: culture fit, discuss their change to the codebase, have them do the code review
              5. Hire/no hire decision.
              1. 1

                For me, this would be a red flag - 2-3 interviews (including the absolute worst kind, the phone interview) and a day-long coding test? Maybe if I were applying to be CTO of a Fortune 500 company. For a lowly developer, this is too much.

                1. 4

                  You’ve definitely been luckier with interviews than I have. Every company I’ve ever interviewed with had at least three rounds!

                  1. 1

                    My last two interviews were just one short onsite session each. Both led to an offer. I turned down companies with more involved & time-consuming processes.

            1. 3

              I’m @mjn@icosahedron.website. I post some mix of tech/research stuff and miscellaneous my-daily-life sorts of stuff.

              1. 7

                Are FreeBSD jails remotely as usable as Docker for Linux? Last time I checked they seemed rather unusable.

                1. 3

                  In technical terms they’re just fine, in my semi-professional experience. What they lack is the ergonomics of Docker.

                  1. 5

                    I’m not very impressed with the ergonomics of docker, and it’s definitely not obvious to me that BSD jails are an inferior solution to it.

                    1. 5

                      Ok, so I’m a big fan of BSDs, so I’d be very interested if there were a nice (not necessarily identical, but similar) way to do roughly the following things with jails:

                      vi Dockerfile # implement your container based on another container
                      docker build -t <internal_storage>/money-maker:0.9 . # build and tag it
                      docker push <internal_storage>/money-maker:0.9 # push it to the internal registry
                      ssh test_machine
                      docker run <internal_storage>/money-maker:0.9 # run the container on the test machine
                      
                      1. 5

                        The obvious equivalent I can think of is:

                        • Create a jail
                        • Set it up (whether manually or via a Dockerfile-equivalent shell script)
                        • Store a tar of its filesystem to https://<internal_storage>/money-maker:0.9
                        • Create a jail on the destination machine
                        • Untar the stored filesystem
                        • Start the jail

                        These steps aren’t integrated nicely the way they are with docker, but they are made of small, otherwise-useful parts which compose easily.
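
                        In shell terms, a rough sketch using only the base tools (jail(8), jexec(8), tar); the names, paths and address below are placeholders, and provision.sh stands in for whatever your Dockerfile would have done:

                        # unpack a base system into the jail root (assumes base.txz is already downloaded for your release)
                        mkdir -p /jails/moneymaker
                        tar -xpf base.txz -C /jails/moneymaker
                        # start it and provision it
                        jail -c name=moneymaker path=/jails/moneymaker ip4.addr=192.0.2.10 persist
                        jexec moneymaker sh /root/provision.sh
                        # snapshot the filesystem and push the tarball to internal storage
                        jail -r moneymaker
                        tar -cJf moneymaker-0.9.txz -C /jails/moneymaker .
                        scp moneymaker-0.9.txz internal_storage:/images/
                        # on the test machine: unpack into a fresh root and start the jail
                        ssh test_machine
                        mkdir -p /jails/moneymaker && tar -xpf moneymaker-0.9.txz -C /jails/moneymaker
                        jail -c name=moneymaker path=/jails/moneymaker ip4.addr=192.0.2.11 persist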

                        1. 4

                          Sure. How much work do you think needs to be done to get the benefits of Docker’s layer-based approach to containers? If your containers are based on each other, you get significant space savings that way.

                          1. 0

                            ZFS deduplicates stored blocks, so you would still get the space savings. You would still have to get it over the network, though.

                            1. 6

                              ZFS does not dedup by default, and deduping requires so much RAM that I wouldn’t turn it on, for performance reasons. I tried a 20 TiB pool with and without it; the speed was about 300K/s versus something closer to the underlying SSD’s performance. It was that bad, even after trying to tune the piss out of it.

                              Hardlinks would be faster at that point.
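
                              If you want to sanity-check this on your own pool before turning it on, roughly (pool/dataset names are placeholders, and enabling dedup only affects blocks written afterwards):

                              zdb -S tank                      # simulate dedup on existing data; prints a DDT histogram and estimated ratio
                              zpool get dedupratio tank        # current dedup ratio for the pool
                              zfs set dedup=on tank/containers # off by default; applies only to new writes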

                              1. 3

                                No no no, ZFS dedup wastes some ridiculous amount of RAM. Do you use it IRL or are you just quoting a feature list?

                                1. 1

                                  I use it, but not on anything big, just my home NAS.

                          2. 2

                            One option is to use a jail-management system built on top of the raw OS functionality. They tend to take an opinionated approach to how management/launching/etc. should work, and enforce a more fleshed-out container model. As a result they’re more ergonomic if what you want to do fits with their opinions. CBSD is probably the most full-featured one, and is actively maintained, but there are a bunch of others too. Some of them (like CBSD) do additional things like providing a unified interface for launching a container as either a jail or a bhyve VM.

                    1. 21

                      I used to work in academia, and this is an argument that I had many times. “Teaching programming” is really about teaching symbolic logic and algorithmic thinking, and any number of languages can do that without the baggage and complexity of C++. I think, if I was in a similar position again, I’d probably argue for Scheme and use The Little Schemer as the class text.

                      1. 10

                        This is called computational thinking. I’ve found the topic to be contentious in universities, where many people are exposed to programming for the first time. Idealists will want to focus on intangible, fundamental skills with languages that have a simple core, like scheme, while pragmatists will want to give students more marketable skills (e.g. python/java/matlab modeling). Students also get frustrated (understandably) at learning “some niche language” instead of the languages requested on job postings.

                        Regardless, I think we can all agree C++ is indeed a terrible first language to learn.

                        1. 9

                          Ironically, if you’d asked me ten years ago I would’ve said Python. I suppose I’ve become more idealist over time: I think those intangible, fundamental skills are the necessary ingredients for a successful programmer. I’ve worked with a lot of people who “knew Python” but couldn’t think their way through a problem at all; I’ve had to whiteboard for someone why their contradictory boolean condition would never work. Logic and algorithms matter a lot.

                          1. 9

                            I think Python is a nice compromise. The syntax and semantics are simple enough that you can focus on the fundamentals, and at the same time it gives students a base from which to explore more practical aspects if they want.

                          2. 7

                            Students also get frustrated (understandably) at learning “some niche language” instead of the languages requested on job postings.

                            Yeah, I feel like universities could do a better job at setting the stage for this stuff. They should explain why the “niche language” is being used, and help the students understand that this will give them a long term competitive advantage over people who have just been chasing the latest fads based on the whims of industry.

                            Then there is also the additional problem of industry pressuring universities into becoming job training institutions, rather than places for fostering far-looking, independent thinkers, with a deep understanding of theory and history. :/

                            1. 3

                              I’ve been thinking about this a bit lately, because I’m teaching an intro programming languages course in Spring ‘19 (not intro to programming, but a 2nd year course that’s supposed to survey programming paradigms and fundamental concepts). I have some scope to revise the curriculum, and want to balance giving a survey of what I think of as fundamentals with picking specific languages to do assignments in that students will perceive as relevant, and ideally can even put on their resumes as something they have intro-level experience in.

                              I think it might be getting easier than it has been in a while to square this circle though. For some language families at least, you can find a flavor that has some kind of modern relevance that students & employers will respect. Clojure is more mainstream than any Lisp has been in decades, for example. I may personally prefer CL or Scheme, but most of what I’d teach in those I can teach in Clojure. Or another one: I took a course that used SML in the early 2000s, and liked it, but it was very much not an “industry” language at the time. Nowadays ReasonML is from Facebook, so is hard to dismiss as purely ivory tower, and OCaml on a resume is something that increasingly gets respect. Even for things that haven’t quite been picked up in industry, there are modernish communities around some, e.g. Factor is an up-to-date take on stack languages.

                              1. 3

                                I think one way you can look at it is: understanding how to analyse the syntax and semantics of programming languages can help you a great deal when learning new languages, and even when learning new frameworks (Rails, RSpec, Ember, React, NumPy, regexes, query builders, etc. could all be seen as domain-specific PLs embedded in a host language). Often they have weird behaviours, but it really helps to have a mental framework for quickly understanding new language concepts.

                                Note that I wouldn’t recommend this as a beginner programming language course - indeed I’d probably go with TypeScript, because if all else fails they’ll have learned something that can work in many places, and sets them on the path of using types early on. From the teaching languages Pyret looks good too, but you’d have to prevent it from being rejected. But as soon as possible I think it’s important to get them onto something like Coursera’s Programming Languages course (which goes from SML -> Racket -> Ruby, and shows them how to pick up new languages quickly).

                            2. 7

                              I started college in 1998, and our intro CS class was in Scheme. At the time, I already had done BASIC, Pascal, and C++, and was (over)confident in all of them, and I hated doing Scheme. It was different, it was impractical, I saw no use in learning it. By my sophomore year I was telling everyone who would listen that we should just do intro in Perl, because you can do useful things in it!

                              Boy howdy, was I wrong, and not just about Perl. I didn’t appreciate it at the time, and I didn’t actually appreciate it until years later. It just sorta percolated up as, “Holy crap, this stuff is in my brain and it’s useful.”

                              1. 3

                                I hear this reasoning about teaching tangible skills, but even one, two, or three quarters of Python is not enough for a job - at least it shouldn’t be. If it is, then employers are totally OK with extremely shallow knowledge.

                                1. 1

                                  I didn’t even realize I had read this a month ago, never mind that I had commented on it, before I wrote my own post on the topic. Subconscious motivation at its finest.

                              1. 5

                                These have been floating around FOR-EVER but I’m glad they keep cropping up. I see evidence of these constantly in just about every technical community I inhabit.

                                They were an eye opener for me at the time. Particularly #2 (accept me as I am) and #4 (transitive).

                                Grokking how fundamentally false some of these are was definitely a step towards finally growing up in certain ways that I REALLY needed to (and had needed to for a long time).

                                I also credit having successfully jettisoned #2 with being why at age 35 I finally started dating and met my wife :)

                                1. 5

                                  I recognize some of these patterns, but I don’t think I associate them with technical communities. Where I’ve run into them is in “cultural geek” communities, those organized around things like fandoms. This could be idiosyncratic based on which specific kinds of both communities I’ve run into though.

                                  1. 2

                                    I’ll take your word for it. In my case, the degree of overlap between technical communities and various fandoms is extremely high.

                                    1. 1

                                      That’s interesting and believable too, which is why I added the caveat that it could well be idiosyncratic. I’ve definitely read about this kind of thing in my area, artificial intelligence, e.g. the old MIT hacker culture. I just haven’t encountered it in person, and it always felt like something that existed way before my time. Out of curiosity, what kinds of technical communities have you encountered where the overlap is high?

                                      The AI conferences I personally go to do have a handful of geeky people, but way more business/startup/government/military/professor types. A bunch of these factors pretty clearly don’t apply as far as I can tell, for better or worse. For example, socially awkward and/or unhygienic people are pretty much jettisoned without a second thought if someone thinks they might interfere with funding.

                                      1. 2

                                        So, I want to be sure to constrain this properly.

                                        I came into the Boston technical scene in the early 1990s. At that time, the overlap with the Boston science fiction fandom community was HUGE as it was for the Polyamory and BDSM communities (of which I’ve never been a part. Vanilla and proud yo :)

                                        In fact, I pretty much got my start in the Boston tech scene by showing up at a science fiction fandom oriented group house in the middle of a blizzard and passing out my resume to anyone who’d talk to me :) I ended up living in that group house for a time.

                                        I’m fairly sure this isn’t representative of the here and now. Our industry has become a very VERY different place several times over since then (mostly for the better) and I suspect that younger folks are being drawn from a much more diverse background of interests.

                                        1. 1

                                          Hah interesting, I know some people who I think had a similar kind of experience in the SF Bay Area in the ’90s, living in group houses to get broadband internet and such. I got into tech in the late ‘90s in suburban Houston, which might have had a geek scene, but if so I didn’t know about it. The tech scene I was exposed to was much more “professional engineering” oriented, anchored by people who worked at NASA or NASA contractors (plus some people doing tech at oil companies).

                                    2. 1

                                      I’ve not found that to be the case, even here in the Lobsters community in its forum and chat forms.

                                    3. 2

                                      I’m curious how #2 motivated you to start dating. Were you just generally more receptive of criticism from friends, and if so, how does that translate to wanting to start dating?

                                      1. 4

                                        Not so much about wanting to start dating, but being willing to make the changes necessary to be perceived as attractive.

                                        “Friends accept me as I am”.

                                        Who cares if I have a giant sloppy beard, dress in sweat pants and faded T-shirts all the time, and generally take PRIDE in not giving two shits about my personal appearance? My TRUE friends will accept me for who I am and see past all that.

                                        Except that this is the real world. How you look DOES matter. Shave the beard, lose a few pounds, buy some decent clothing and it’s a whole different ballgame.

                                        1. 1

                                          I definitely agree with what you’re saying, but it reminds me of some definitions from Douglas Coupland’s novel Generation X :

                                          anti-victim device (AVD) - a small fashion accessory worn on an otherwise conservative outfit which announces to the world that one still has a spark of individuality burning inside: 1940s retro ties and earrings (on men), feminist buttons, noserings (on women), and the now almost completely extinct teeny weeny “rattail” haircut (both sexes).

                                          … and:

                                          personality tithe - a price paid for becoming a couple; previously amusing human beings become boring: “Thanks for inviting us, but Noreen and I are going to look at flatware catalogs tonight. Afterward we’re going to watch the shopping channel.”

                                          https://en.wikiquote.org/wiki/Generation_X:_Tales_for_an_Accelerated_Culture

                                          Some parts of a given personality are stupid and need to be shorn so the person can have a more interesting life. It’s easy to lionize the idea that someone can be Good Enough, or, in somewhat different subcultures, Cool Enough, that you never have to compromise on those things, but even if you luck into a job where that works, it doesn’t and can never work in a healthy personal relationship.

                                          1. 4

                                            Sounds like I need to read that book!

                                            I don’t personally see it as compromise.

                                            The truth is that my self confidence was in the shitter at that time. My personal appearance was just one outward manifestation of that.

                                            Recognizing that I needed to make changes if I wanted to meet someone and be less lonely was a first step towards letting go of some of that baggage.

                                    1. 3

                                      I wish the article were a bit more substantive.

                                      They touch on type inference. Does anyone have some refs for how to do this in Lisp?

                                      1. 6

                                        They touch on type inference. Does anyone have some refs for how to do this in Lisp?

                                        Henry Baker has a paper about it: The Nimble Type Inferencer for Common Lisp-84

                                        http://home.pipeline.com/~hbaker1/TInference.html

                                        1. 4

                                          You don’t do this yourself - the compiler does it for you, inferring it from the code, with or without the aid of type declarations by the programmer.

                                          The article has the example of declaring the types, which allows the compiler to generate tight, specialized code.

                                          1. 2

                                            Sure, agreed. But in my case, I’m implementing the compiler. So I was hoping for some refs on how to do type inference.

                                            1. 12

                                              SBCL implements it as type propagation using dataflow analysis. Types in some places are known from explicit declarations, constant literals, known functions, etc., and this information is propagated in a pass named “constraint propagation”, to infer the types of other expressions when possible.

                                              I’m not sure whether there’s proper documentation anywhere, but this post on some of the type propagator’s weaknesses starts with a summary of how it works [1].

                                              “Gradual typing” has become a trend outside the Lisp world lately, and I believe those systems do something vaguely similar to propagate types. That might be another place to find implementation hints.

                                              [1] Note when reading this post that “Python” is the name of SBCL’s compiler; the name predates Guido van Rossum’s language. The reason the compiler has a separate name in the first place is that SBCL’s predecessor, CMUCL, had an interpreter, a bytecode compiler, and a native-code compiler, and Python was the name of the native-code compiler.

                                              1. 2

                                                SBCL implements it as type propagation using dataflow analysis. Types in some places are known from explicit declarations, constant literals, known functions, etc., and this information is propagated in a pass named “constraint propagation”, to infer the types of other expressions when possible.

                                                Ah! Interesting! This is what I was planning to do with my toy lisp :) It might be worth a scholarly search; I first heard of it when using Erlang, under the name of Success Typing. I could be completely misremembering, but I think Dialyzer uses this method of program typing.

                                                1. 2

                                                  I don’t think this is quite the same thing as Dialyzer’s Success Typing, though there might be some overlap. Dialyzer’s goals are very different in that it’s only concerned with correctness and not performance. But it might use some similar methods.

                                              2. 2

                                                This may help, it’s an overview of FB’s Flow system for JS https://www.youtube.com/watch?v=VEaDsKyDxkY

                                          1. 5

                                            Only tangentially on topic: I hadn’t noticed there’s a Talos II in a more hobbyist affordable price range now. Last time I looked the complete-system prices were $7k or so, and that’s still what the flagship one goes for, but now they have a $2k “Entry-Level Developer System” with one 4-core/16-thread CPU. For $2.5k you can bump it to 16G RAM and 8-core/32-thread. Still not cheap for those specs compared to x86 unless you have pretty specific POWER9-optimized stuff, but it’s at least not an absurd amount of money.

                                            1. 5

                                              Rumour is that there’ll be a sub-$1k option along the lines of a Mac Mini at some point, too.

                                            1. 9

                                              This strikes me as a step-by-step rediscovery of “Checked Exceptions”, but with an even more awkward encoding.

                                              1. 10

                                                Checked exceptions are underrated.

                                                1. 5

                                                  There’s a lobste.rs discussion from 6 months ago that’s somewhat relevant on that (but with Rust rather than Haskell).

                                                1. 14

                                                  There are kind of two things here. The high-level APIs for machine learning are typically in Python or R. The actual implementations of the data structures and algorithms mostly are in C, C++, or Fortran, though, with Python/R bindings. The usual reasons given for this split approach are that pure Python/R libraries are too slow, but using C/C++/Fortran libraries directly is too tedious and low-level.

                                                  1. 4

                                                    Judging from what I’ve seen in other domains, that depends. There’s no doubt that in a proper ML library, most of the fundamental parts are heavily optimized by, e.g., implementing them in a lower-level language and using FFI bindings. However, anything built on top of them is not. And that can include additional performance-sensitive components, written outside the performance haven of C/C++/Fortran. If these are useful enough for other devs to make use of them, inefficiency can spread through the ecosystem.

                                                    No matter how much performance boilerplate you handle in a library, developers will always have their own data structures in mind. But perhaps in ML, people use mostly pre-made components, to a degree such that most of the performance-sensitive logic really does happen in more optimized code. Do you find this to be the case?

                                                    1. 3

                                                      people use mostly pre-made components

                                                      Varies by area, but I think this is true in many of them. This is probably best established in the R world, since it’s older (going back decades to its predecessor, Bell Labs S). Far more people use the system as data analysts than implement new statistical methods. In fact many S/R users traditionally didn’t see themselves as programming, but as using an interactive console, which is why it has features like saving/reloading the current workspace supporting a purely interactive usage without ever explicitly writing a program/script in a text editor. (It’s gotten a little more common to write explicit scripts lately though, with the reproducible-science push.)

                                                      Python/ML people are more likely to see Python as a programming language rather than just a data-analysis console, but if you look at what most people use it for, it’s the same kind of workflow of loading and cleaning data, choosing a model, setting model parameters, exporting or plotting results. The heavy lifting is done by something like TensorFlow which your average ML practitioner is not going to be contributing code to (even most ML researchers won’t be).

                                                  1. 3

                                                    I found this very interesting. I’ve noticed that in the media “A.I.” seems to often refer to just deep learning, so its failures will definitely affect the public perception of the whole field.

                                                    1. 2

                                                      That’s one of the sad (or funny) parts of the “AI winter” phenomenon: each cycle, there’s a different technology called “AI”. But let’s be honest, public perception owes more to Hollywood movies than scientific consensus. The interesting question is who’s handing out money, and what do they think will succeed or fail. Recently it’s been difficult to get funding for anything but deep neural nets.

                                                      1. 1

                                                        Recently it’s been difficult to get funding for anything but deep neural nets.

                                                        For venture-capital funding, yes, but public funding bodies aren’t putting all their eggs in the deep-learning basket. The NSF, at least, continues to fund a pretty wide range of methods, including logic-based AI, symbolic planning, Bayesian methods, etc.

                                                    1. 2

                                                      How did it come to be like this? I don’t imagine this has anything to do with efficiency, judging by the amount of labour (on the employer’s end) exerted to make candidates jump through hoops.

                                                      1. 8

                                                        Nobody wants to take a risk and get blamed for a bad hire, so they set up more and more process. It’s like sifting for gold, except you have like twenty pans and you throw away everything that doesn’t make it through any of the sifters without looking.

                                                        1. 3

                                                            That explanation seems plausible, but then I wonder, why is the process so much more heavyweight in tech than in just about any other field, including other STEM fields? In sheer number of hours of interviewing that it takes to get a job, counting all the phone screens, take-home assignments, in-person interviews, etc., tech is way out of sync with norms elsewhere. A typical hiring process in any other STEM field is a resume screen, followed by one phone screen (typically an hour), followed by an on-site interview that can last somewhere between a few hours and a full day.

                                                          1. 8

                                                            Survivorship bias could be why. The ones perpetuating this broken process are those who sailed through it.

                                                            There’s also a lot of talent floating around, and the average company won’t be screwed by an average hire. So even if you miss out on that quirky dev with no social skills but the ability to conjure up a regex interpreter solely from memory, it doesn’t really matter to them.

                                                            It should matter to startups, though, because hiring average devs means you’ll fail.

                                                            1. 3

                                                              Depends on the startup; until you have product-market fit, you don’t need amazing engineers so much as you need people who can churn out prototypes fast.

                                                            2. 4

                                                              It might be partly due to the volume of applicants. With tech you have:

                                                              1. Massive concentration of talent (e.g silicon valley)
                                                              2. Remote work

                                                              For those reasons you can often get hundreds of applicants to a posting. Other STEM disciplines don’t support working remotely, and in some cases (think civil engineering) need their engineers to be physically on-site. I’d wager they tend to be much more dispersed around the country and companies can only draw from the local talent pool.

                                                              1. 3

                                                                I applied to a remote, London-based, three-person not-a-startup. I did the homework and made it among the 50 or so people they interviewed by phone. They told me they got over 2,000 applications.

                                                              2. 4

                                                                Particularly in other STEM fields it’s pretty common to have more rigorous formal education requirements as part of the hiring bar (either explicitly or by convention.) Software development has always been somewhat more open to those from other backgrounds, but the flip side to that is that there seems to be a desire to set a higher performance/skills bar (or at least look like you are) as a result. There are potentially pros and cons to both.

                                                                I’d also wonder, particularly around the online tests/challenges/screenings/etc…, whether this is a result of tech people trying to come up with a tech solution to scale hiring the same way you’d approach scaling a technological system, and the resulting expansion in complexity.

                                                            3. 4

                                                              Hiring is hard, and a lot of work, and not something most engineers will willfully dive into. Therefore, at most companies, as much as possible of the hiring process gets farmed out to HR / management. And they do the best job they can, given their lack of domain knowledge. Unsurprisingly, they also favor potential employees that they think will be “good” based on their ability to sit, stay, heel, and jump through hoops. Fetch. Good boy. Who wants a cookie. ;)

                                                              Another take: Mistakes really, really, suck. And if you just add more analysis and testing to a hiring process you’re more likely to spot a problem in a candidate.

                                                              1. 2

                                                                I think mistakes are a big part of it. Software work is highly leveraged: what you write might run hundreds, thousands or millions of times per day. Being a little off can have big downstream consequences.

                                                              2. 4

                                                                I think it’s partly because there’s no training for it in most jobs, it’s very different to expertise in software, it’s very unclear what best practices are (if there are any), and for a lot of people it’s a time suck out of their day, when they’ve already got lots of work to do.

                                                                So you end up with these completely ad-hoc processes, wildly different from company to company (or differing even person to person during the interview), without anyone necessarily responsible for putting a system in place and getting it right.

                                                                Not to mention HR incentives may not align (points / money for getting someone hired) with engineering, and then you’ve got engineers who use the interview as a way to show off their own smarts, or who ask irrelevant questions (because though you all do code review, no-one does interview question review), or who got the interview dumped on their plate at the last minute because someone else is putting out a dumpster fire, and they’ve never heard of you or seen your resume before they walk into the room…

                                                                And decision making is ad-hoc, and the sync-up session after the interview gets put off for a couple of days because there’s a VP who wants to be on the call but they’re tied up in meetings, and in the meantime the candidate has an interview with another company so you’ve just moved forward with booking the onsite anyway…

                                                                So many reasons :)

                                                                1. 2

                                                                  It’s all marketing.

                                                                  I don’t think I would have taken any of my jobs if the recruiters were like “we’re not going to bother interviewing you because all we have is monkey work, when can you start?”, even though in hindsight that would have been totally adequate.

                                                                  So companies play hard to get and pretend 99% of their applicants are too bad to do the jobs on offer, when the reality is closer to the opposite.

                                                                  1. 1

                                                                    <[apple] insert [google] company [facebook] here [microsoft]> only hires the best.

                                                                  1. 2

                                                                    Our research group is going “on the road” to work in a public space as scientists-in-residence at the Eden Project this week. They have a new thing where there is an art space and a science lab, with rotating groups there each week. Should be interesting, even though as computing researchers we don’t have the kind of impressive looking “science equipment” to bring along that people like chemists have. We’re bringing along a 3d printer and some small robots though so it isn’t just a few researchers coding on laptops, even though that would be a more authentic representation of what our lab usually looks like.

                                                                    1. 1

                                                                      Those are some beautiful images!

                                                                    1. 1

                                                                      I’m glad to receive a link to my own article, but I do disagree somewhat with what is said in this one.

                                                                      The specific example of cron/NFS is in fact a hard dependency: cron runs reboot tasks when it starts, and if they need NFS mounts, then those mounts should be a hard requirement of cron, “ordering” is not sufficient.

                                                                      The implied issue is that cron doesn’t need the NFS mounts once it’s run those tasks, so the dependency “becomes false” at that point. If I understand the argument correctly, it is: seeing as “the system as a whole wants both”, you could use a startup ordering to avoid leaving a lingering dependency once the @reboot jobs have run while still ensuring that NFS mounts are available before cron starts. This is true, but it would be fragile and racy. For instance, there would be nothing to prevent the NFS mounts, even with the co-operation of the service manager, being unmounted just after crond begins execution, but before it has even started (or possibly when it is midway through) running the @reboot tasks.

                                                                      In my eyes there are two ways to solve it properly: separate cron boot tasks from regular cron so that you can run them separately (that would mean changing cron or using some other tool), or have the cron boot tasks work by starting short-running services (which can then list NFS mounts as a dependency). The latter requires non-privileged users to be allowed to start services, though, and that’s opening a can of worms. I feel that ultimately the example just illustrates the problems inherent in cron’s @reboot mechanism.
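
                                                                      For what it’s worth, here’s roughly what that short-running-service approach could look like as an admin-managed systemd oneshot unit; the unit name, mount point and script path here are invented:

                                                                      # one unit per boot task, so the NFS dependency can be stated directly
                                                                      cat > /etc/systemd/system/boot-task.service <<'EOF'
                                                                      [Unit]
                                                                      Description=Per-boot task that needs the NFS mounts
                                                                      RequiresMountsFor=/net/projects
                                                                      
                                                                      [Service]
                                                                      Type=oneshot
                                                                      ExecStart=/usr/local/sbin/boot-task.sh
                                                                      
                                                                      [Install]
                                                                      WantedBy=multi-user.target
                                                                      EOF
                                                                      systemctl daemon-reload
                                                                      systemctl enable boot-task.service

                                                                      RequiresMountsFor= pulls in and orders after the mount units for the listed paths, so the task only runs once the NFS mount is actually there.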

                                                                      (Not to mention that there’s a pre-existing problem: for cron, “reboot” just means “cron started”. If you start and stop cron, those reboot tasks will all run again…)

                                                                      1. 2

                                                                        Belatedly (as the author of the linked-to article): In our environment, NFS mounts are a hard dependency of those specific @reboot cron jobs, but not of cron in general. In fact we specifically want cron to run even if NFS mounts are not there, because one of the system cron jobs is an NFS mount updater that will, as a side effect, keep retrying NFS mounts that didn’t work the first time. Unfortunately there is no good way to express this in current init systems that I know about and @reboot cron jobs are the best way we could come up with to allow users to start their own services on boot without having to involve sysadmins to add, modify, and remove them.

                                                                        (With sufficient determination we could write our own service for this which people could register with and modify, and in that service we could get all of the dependencies right. But we’re reluctant to write local software, such a service would clearly be security sensitive, and @reboot works well enough in our environment.)

                                                                        1. 1

                                                                          But it’s not a dependency of cron, it’s a dependency of these particular tasks. Cron the package that contains the service definition has no idea about what you put into your crontab.

                                                                          Yes, it’s a problem in cron. This is why there’s movement towards just dropping cron in favor of integrating task scheduling into service managers. Apple launchd was probably first, systemd of course has timers too, and the “most Unix” solution is in Void Linux (and runit-faster for FreeBSD now): snooze as runit services. In all these cases, each scheduled task can have its own dependencies.

                                                                          (Of course the boot tasks then are just short-running services as you described.)

                                                                          1. 1

                                                                            But it’s not a dependency of cron, it’s a dependency of these particular tasks

                                                                            Agreed, but if you’re the sysadmin and know that cron jobs are using some service/unit, then you’d better make sure that the cron service is configured with an appropriate dependency. At least, that’s how I view it. Without knowing more about the particular system in question, I’m not sure we can say much more about how it should be configured - I agree that cron isn’t perfect, particularly for “on boot” tasks, but at least it’s a secure way of allowing unprivileged users to set up their own time-based tasks. (I guess it’s an open question whether that should really be allowed anyway).

                                                                          2. 1

                                                                            I was also confused by that, but from the discussion in the comments, I think the reason they don’t want it to be a hard dependency is that, in their setup, some machines typically have NFS configured and some don’t. In the case where the machine would start NFS anyway, they want an ordering dependency so it starts before cron. But if NFS wasn’t otherwise configured to start on that machine, then cron should start up without trying to start NFS.

                                                                            1. 1

                                                                              Yes, that accords with the comments below the post:

                                                                              On machines with NFS mounts and Apache running, we want Apache to start after the NFS mounts; however, we don’t want either NFS mounts or Apache to start the other unless they’re explicitly enabled. If we don’t want to have to customize dependencies on a per-machine basis, this must be a before/after relationship because neither service implies the other

                                                                              The problem is “don’t want to have to customize dependencies” is essentially saying “we are ok with the dependencies being incomplete on some machines if it means we can have the same dependency configurations on all machines”. That seems like the wrong approach to me; you should just bite the bullet and configure your machines correctly; you’ve already got to explicitly enable the services you do want on each machine, anyway.

                                                                              1. 1

                                                                                This gets into the philosophy of fleet management. As someone who manages a relatively small fleet by current standards (we only have a hundred machines or so), my view is that the less you have to do and remember for specific combinations of configurations, the better; as much as possible you want to be able to treat individual configuration options as independent and avoid potential combinatorial explosions of combinations. So it’s much better to be able to set a before/after relationship once, globally, than to remember that on a machine that both has NFS mounts and Apache that you need a special additional customization. Pragmatically you’re much more likely to forget that such special cases exist and thus set up machines with things missing (and such missing cases may be hard to debug, since they can work most of the time or have only tiny symptoms when things go wrong).

                                                                                (One way to think of it is that it is a building blocks approach versus a custom building approach. Building blocks is easier.)

                                                                          1. 21

                                                                            So I think I’m a bit late for the big go and rust and garbage collection and borrow checker discussion, but it took me a while to digest, and came up with the following (personal) summary.

                                                                            Determining when I’m done with a block of memory seems like something a computer could be good at. It’s fairly tedious and error prone to do by hand, but computers are good at monotonous stuff like that. Hence, garbage collection.

                                                                            Or there’s the rust approach, where I write a little proof that I’m done with the memory, and then the computer verifies my proof, or rejects my program. Proof verification is also something computers are good at. Nice.

                                                                            But writing the proof is still kind of a pain in the ass, no? Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done. Go figure out the refs and borrows to make it work, kthxbye.

                                                                            1. 18

                                                                              But writing the proof is still kind of a pain in the ass, no? Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done. Go figure out the refs and borrows to make it work, kthxbye

                                                                              I’m in the middle of editing an essay on this! Long story short, proving an arbitrary code property is undecidable, and almost all the decidable cases are in EXPTIME or worse.

                                                                              1. 10

                                                                                I’m kinda familiar with undecidable problems, though with fading rigor these days, but the thing is, undecidable problems are undecidable for humans too. The impossible task becomes no less impossible by making me do it!

                                                                                I realize it’s a pretty big ask, but the current state of the art seems to be redefine the problem, rewrite the program, find a way to make it “easy”. It feels like asking a lot from me.

                                                                                1. 10

                                                                                  The problem is undecidable (or very expensive to decide) in the most general case; what Rust does is solve it in a more limited case. You just have to prove that your usage fits into this more limited case, hence the pain in the ass. Humans can solve more general cases of the problem than Rust can, because they have more information about the problem. Things like “I only ever call function B with inputs produced from function A, function A can only produce valid inputs, so function B doesn’t have to do any input validation”. Making these proofs without computer assistance is no less of a pain in the ass. (Good languages make it easy to enforce these proofs automatically at compile or run time, good optimizers remove redundant runtime checks.)

                                                                                  Even garbage collectors do this; their safety guarantees are a subset of what a perfect solution would provide.

                                                                                  1. 3

                                                                                    “Humans have more information about the problem”

                                                                                    And this is why a conservative borrow checker is ultimately the best. It can be super optimal, and not step on your toes. It’s up to the human to adjust the lifetime of memory because only the human knows what it wants.

                                                                                    I AM NOT A ROBOT BEEP BOOP

                                                                                  2. 3

                                                                                    Humans have a huge advantage over the compiler here, though. If they can’t figure out whether a program works or not, they can change it (with the understanding gained by thinking about it) until they are sure it does. The compiler can’t (or shouldn’t) go making large architectural changes to your code. If the compiler tried its hardest to be as smart as possible about memory, the result would be that when it says “I give up, the code needs to change”, the human who can change the code is going to have a very hard time understanding why and what they need to change (since they haven’t been thinking about the problem).

                                                                                    Instead, what Rust does is apply as intelligent a set of rules as they could that produce consistent, understandable results for the human. So the compiler can say “I give up, here’s why”. And the human can say “I know how the compiler will work, it will accept this, this time” instead of flailing about trying to convince the compiler it works.

                                                                                    1. 1

                                                                                      I realize it’s a pretty big ask

                                                                                      I’ve been hearing this phrase “big ask” lately, generally from business people; it seems very odd to me. Is it new or have I just missed it up to now?

                                                                                      1. 2

                                                                                        I’ve been hearing it from “business people” for a couple years at least, I assume it’s just diffusing out slowly to the rest of society.

                                                                                        The new one I’m hearing along these lines is “learnings”. I think people just think it makes them sound smart if they use different words.

                                                                                        1. 1

                                                                                          A “learning”, as a noun, is attested at least as far back as the early 1900s, FYI.

                                                                                          1. 0

                                                                                            This sort of comment annoys me greatly. Someone used a word incorrectly 100 years ago. That doesn’t mean it’s ‘been a word for 100 years’ or whatever you’re implying. ‘Learning’ is not a noun. You can argue about the merits of prescriptivism all you like, you can have whatever philosophical discussion you like as to whether it’s valid to say that something is ‘incorrect English’, but ‘someone used it in that way X hundred years ago’ does not justify anything.

                                                                                            1. 2

                                                                                              This sort of comment annoys me greatly. Someone used a word incorrectly 100 years ago. That doesn’t mean it’s ‘been a word for 100 years’ or whatever you’re implying. ‘Learning’ is not a noun.

                                                                                              It wasn’t “one person using it incorrectly”; that’s not even remotely how attestation works in linguistics. And of course, of course it is very much a noun. What precisely, man, do you think a gerund is? We have learning curves, learning processes, learning centres. We quote Pope to one another when we say that “a little learning is a dangerous thing”.

                                                                                              To take the position that gerunds aren’t nouns and cannot be pluralized requires objecting to such fluent Englishisms as “the paintings on the wall”, “partings are such sweet sorrow”, “I’ve had three helpings of soup”

                                                                                              1. 0

                                                                                                ‘Painting’ is the process of painting. You can’t pluralise it. It’s also a (true) noun, the product of doing some painting. There it obviously can be pluralised. But ‘the paintings we did of the house kept improving the sheen of the walls’ is not valid English. They’re different words.

                                                                                                1. 2

                                                                                                  LMAO man, how do you think Painting became a “true” noun? It’s just a gerund being used as a noun that you’re accustomed to. One painted portraits, landscapes, still lifes, studies, etc. To group all these things together as “paintings” was an instance of the exact same linguistic phenomenon that gives us the idea that one learns learnings.

                                                                                                  You’re arguing against literally the entire field of linguistics here on the basis of gut feelings and ad hoc nonsense explanations.

                                                                                                  1. 0

                                                                                                    You’re arguing against literally the entire field of linguistics here on the basis of gut feelings and ad hoc nonsense explanations.

                                                                                                    No, I’m not. This has literally nothing to do with linguistics. That linguistics is a descriptivist scientific field has nothing to do with whether ‘learnings’ is a real English word. And it isn’t. For the same reason that ‘should of’ is wrong: people don’t recognise it as a real word. Words are what we say words are. People using language wrong are using it wrong in the eyes of others, which makes it wrong.

                                                                                                    1. 1

                                                                                                      That linguistics is a descriptivist scientific field has nothing to do with whether ‘learnings’ is a real English word. And it isn’t. For the same reason that ‘should of’ is wrong: people don’t recognise it as a real word. Words are what we say words are.

                                                                                                      Well, I hate to break it to you, but plenty of people say learnings is a word, like all of the people you were complaining about who use it as a word.

                                                                                                      1. 0

                                                                                                        There are lots of people that write ‘should of’ when they mean ‘should’ve’. That doesn’t make them right.

                                                                                                        1. 1

                                                                                                          Yes, and OK is an acronym for Oll Korrect; anyone using it as a phrase is not OK.

                                                                                                          1. 0

                                                                                                            OK has unknown etymology. And acronyms are in no way comparable to simply incorrect grammar.

                                                                                                            1. 1

                                                                                                              Actually it is known. Most etymologists agree that it came from Boston in 1839 originating in a satirical piece on grammar. This was responding to people who insist that English must follow some strict unwavering set of laws as though it were a kind of formal language. OK is an acronym, and it stands for Oll Korrect, and it was literally invented to make pedants upset. Certain people were debating the use of acronyms in common speech, and to lay it on extra thick the author purposefully misspelled All Correct. The word was quickly adopted because pedantry is pretty unpopular.

                                                                                                              1. 1

                                                                                                                What I said is that there is what is accepted as valid and what is not. Nobody educated thinks that ‘should of’ is valid. It’s a misspelling of ‘should’ve’. Nobody thinks ‘shuold’ is a valid spelling of ‘should’ either. Is this really a debate you want to have?

                                                                                                                1. 1

                                                                                                                  I was (mostly) trying to be playful while also trying to encourage you to be a little less litigious about how people shuold and shuold not use words.

                                                                                                                  Genuinely sorry for making you actually upset though, I was just trying to poke fun a little for getting a bit too serious at someone over smol beans, and I was not trying to make you viscerally angry.

                                                                                                                  I also resent the attitude that someone’s grammatical or vocabulary knowledge of English represents an “education”.

                                                                                        2. 1

                                                                                          It seems like in the last 3 years all the execs at my company started phrasing everything as “The ask is…” I think they are trying to highlight that you have input (you can answer an ask with no) vs an order.

                                                                                          In practice, of course, many “asks” are orders.

                                                                                          1. 4

                                                                                            Sure, but we already have a word for that, it’s “request”.

                                                                                            1. 4

                                                                                              Sure, but the Great Nouning of Verbs in English has been an ongoing process for ages and continues apace. “An ask” is just a more recent product of the process that’s given us a poker player’s “tells”, a corporation’s “yearly spend”, and the “disconnect” between two parties’ understandings.

                                                                                            All of those nouned verbs have or had perfectly good nouns that weren’t nominalized verbs, at one point or another in history.

                                                                                              1. 1

                                                                                                One that really upsets a friend of mine is using ‘invite’ as a noun.

                                                                                          2. 1

                                                                                          Newly popular? Merriam-Webster quotes this usage and calls it a Britishism.

                                                                                            https://www.merriam-webster.com/dictionary/ask

                                                                                            They don’t date the sample, but I found it’s from a 2008 movie review.

                                                                                            https://www.spectator.co.uk/2008/10/cold-comfort/

                                                                                            So at least that old.

                                                                                        3. 3

                                                                                          You no doubt know this, but the undecidable stuff mostly becomes decidable if you’re willing to accept a finite limit on addressable memory, which anyone compiling for, say, x86 or x86_64 is already willing to do. So imo it’s the intractability rather than undecidability that’s the real problem.
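
                                                                                    As a degenerate illustration of the finiteness point (a sketch, not a practical verification technique): once the domain is finite you can, in principle, just enumerate it. That’s trivially feasible for a u16 and already hopeless for a u64, which is the intractability problem in a nutshell.

                                                                                    ```rust
                                                                                    // Exhaustively "decide" a property over a finite domain. Fine for u16,
                                                                                    // hopeless for u64: finiteness buys decidability, not tractability.
                                                                                    fn holds_for_all_u16(f: impl Fn(u16) -> bool) -> bool {
                                                                                        (0..=u16::MAX).all(|x| f(x))
                                                                                    }

                                                                                    fn main() {
                                                                                        // Example property: adding then subtracting 1 (wrapping) is the identity.
                                                                                        let ok = holds_for_all_u16(|x| x.wrapping_add(1).wrapping_sub(1) == x);
                                                                                        println!("holds for every u16: {}", ok);
                                                                                    }
                                                                                    ```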

                                                                                          1. 1

                                                                                      It becomes decidable by giving us an upper bound on the number of steps the program can take, so should require us to calculate the LBA (linear bounded automaton) equivalent of a very large BB (Busy Beaver) number. I’d call that “effectively” undecidable, which seems like it would be “worse” than intractable.

                                                                                            1. 2

                                                                                              I agree it’s, let’s say, “very” intractable to make the most general use of a memory bound to verify program properties. But the reason it doesn’t seem like a purely pedantic distinction to me is that once you make a restriction like “64-bit pointers”, you do open up a bunch of techniques for finite solving, some of which are actually usable in practice to prove properties that would be undecidable without the finite-pointer restriction. If you just applied Rice’s theorem and called verifying those properties undecidable, it would skip over the whole class of things that can be decided by a modern SMT solver in the 32-bit/64-bit case. Granted, most still can’t be, but that’s why the boundary that interests me more nowadays is the “SMT can solve this” vs. “SMT can’t solve this” one rather than the CS-theory sense of decidable/undecidable.

                                                                                        4. 6

                                                                                          Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done.

                                                                                          It’s really hard. The main tool for that is separation logic. Manually doing it is harder than borrow-checking stuff. There are people developing solvers to automate such analyses. Example. It’s possible what you want will come out of that. I think there will still be restrictions on coding style to ease analyses.

                                                                                          1. 3

                                                                                            In my experience, automated proof generators are very leaky abstractions. You have to know their search methods in detail, and present your hypotheses in a favorable way for those methods. It can look very clean, but it can mean that seemingly easy changes turn out to be frustrated by the methods’ limitations.

                                                                                            1. 4

                                                                                        I’m totally with you on this. Rust very much feels like an intermediate step and I don’t know why they didn’t take it to its not-necessarily-obvious conclusion.

                                                                                              1. 5

                                                                                          In my personal opinion, it might be just that we’re happy that we can actually get to this intermediate point (of Rust) reliably enough, but have no idea yet how to get to the further point (conclusion). So they took it where they could, and left the subsequent part as an exercise for the reader… I mean, to be explored by future generations of programmers, hopefully.

                                                                                                1. 4

                                                                                                  We have the technology, sort of. Total program analysis is really expensive though, and the workflow is still “edit some code” -> “compile on a laptop” -> repeat. Maybe if we built a gc’ed language that had a mode where you push your program to a long running job on a compute cluster to figure out all the memory proofs.

                                                                                                  This would be especially cool if incrementals could be cached.

                                                                                                  1. 4

                                                                                              I’ve recommended that before. There are millions being invested into SMT/SAT solvers for common bugs that might make that happen, too. Gotta wait for the tooling to catch up. My interim recommendation was a low-false-positive static-analysis tool like RV-Match to be used on everything in the fast path. Anything that passes is done, no GC. Anything that hangs or fails is GC’d. Same with automated proofs to eliminate safety checks: if a proof passes, remove the corresponding check if that’s what the pass allows; if it fails, maybe the code is safe or maybe the tool is too dumb, so keep the check. Might not even need a cluster given the number of cores in workstations/servers and efficiency improvements in tools.

                                                                                                  2. 4

                                                                                                    I think it’s because there’s essentially no chance that a random piece of code will be provable in such a way. Rust encourages, actually to the point of forcing, the programmer to reason about lifetimes and ownership along with other aspects of the type as they’re constructing the program.

                                                                                              I think there may be a long-term evolution as tools get better: the language checks the proofs (which, in my dream, can be both types and more advanced proofs, say that unsafe blocks actually respect safety), and IDEs provide lots of help in producing them.

                                                                                                    1. 2

                                                                                                      there’s essentially no chance that a random piece of code will be provable in such a way

                                                                                                There must be some chance; Rust is already proving memory safety.

                                                                                                Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem to be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used; why can’t it determine for us how to borrow it and who owns it?

                                                                                                      1. 17

                                                                                                  Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem to be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used; why can’t it determine for us how to borrow it and who owns it?

                                                                                                  This is a misconception. The Rust compiler does not see anything beyond the function boundary. That makes lifetime checking efficient. Basically, when compiling a function, the compiler makes a reasonable assumption about how input and output references are connected (the assumption is “they are connected”, also known as “lifetime elision”). This is an assumption communicated to the outside world. If this assumption is wrong, you need to annotate lifetimes.

                                                                                                  When compiling, the compiler will check if the assumption holds for the function body. So, for every function call, it will check if the signature holds (lifetimes are part of the function signature).

                                                                                                  Note that functions with different lifetime annotations taking the same data might differ in their behaviour. It also isn’t always obvious to the compiler whether you want references to be bound together or not, and that situation might be ambiguous.

                                                                                                        The benefit of this model is that functions only need to be rechecked/compiled when they actually change, not some other code somewhere else in the program. It’s very predictable and errors are local to the function.
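
                                                                                                  A small sketch of what that looks like in code (stock Rust, nothing beyond what’s described above): with one input reference the elided signature already says everything; with two, the connection has to be written down, and callers are checked against that signature alone.

                                                                                                  ```rust
                                                                                                  // Elided: one input reference, so the output is assumed to borrow from it.
                                                                                                  // As written this is shorthand for `fn first<'a>(s: &'a str) -> &'a str`.
                                                                                                  fn first(s: &str) -> &str {
                                                                                                      s.split_whitespace().next().unwrap_or("")
                                                                                                  }

                                                                                                  // Two inputs feeding the output: elision can't guess the connection, so it
                                                                                                  // is spelled out in the signature, which is all that callers are checked against.
                                                                                                  fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
                                                                                                      if x.len() >= y.len() { x } else { y }
                                                                                                  }

                                                                                                  fn main() {
                                                                                                      println!("{}", first("hello world"));
                                                                                                      println!("{}", longest("longer", "long"));
                                                                                                  }
                                                                                                  ```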

                                                                                                        1. 2

                                                                                                          I’ve been waiting for you @skade.

                                                                                                          1. 2

                                                                                                            Note that functions with different lifetime annotations taking the same data might differ in their behaviour.

                                                                                                            I wrote this late at night and have some errata here: they might differ in their behaviour wrt. lifetime checking. Lifetimes have no impact on the runtime, an annotation might only prove something safe that the compiler previously didn’t see as safe.

                                                                                                          2. 4

                                                                                                            Maybe I’m misunderstanding. I’m interpreting “take it to its conclusion” as accepting programs that are not annotated with explicit lifetime information but for which such an annotation can be added. (In the context of Rust, I would consider “annotation” to include choosing between &, &mut, and by-move, as well as adding .clone() when needed, especially for refcount types, and of course adding explicit lifetimes in cases that go beyond the present lifetime elision rules, which are actually pretty good). My point is that such a “smarter compiler” would fail a lot of the time, and that failures would be mysterious. There’s a lot of experience around this for analyses where the consequence of failure is performance loss due to not being able to do an optimization, or false positives in static analysis tools.

                                                                                                            The main point I’m making here is that, by requiring the programmer to actually provide the types, there’s more work, but the failures are a lot less mysterious. Overall I think that’s a good tradeoff, especially with the present state of analysis tools.
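
                                                                                                    Spelled out, the kinds of choices I mean look like this (a toy example, nothing project-specific); each one is written by the programmer rather than inferred from usage.

                                                                                                    ```rust
                                                                                                    fn read_only(v: &[i32]) -> i32 { v.iter().sum() }   // shared borrow
                                                                                                    fn mutate(v: &mut Vec<i32>) { v.push(42); }          // exclusive borrow
                                                                                                    fn take(v: Vec<i32>) -> usize { v.len() }            // by move

                                                                                                    fn main() {
                                                                                                        let mut v = vec![1, 2, 3];
                                                                                                        let s = read_only(&v);   // & : look, don't touch
                                                                                                        mutate(&mut v);          // &mut : touch, exclusively
                                                                                                        let n = take(v.clone()); // .clone() : keep ownership by paying for a copy
                                                                                                        let m = take(v);         // move : give the value away for good
                                                                                                        println!("{} {} {}", s, n, m);
                                                                                                    }
                                                                                                    ```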

                                                                                                            1. 1

                                                                                                              I’m interpreting “take it to its conclusion” as accepting programs that are not annotated with explicit lifetime information but for which such an annotation can be added.

                                                                                                      I’ll agree with that definition.

                                                                                                              My point is that such a “smarter compiler” would fail a lot of the time, and that failures would be mysterious.

                                                                                                      This is where I feel we disagree. I feel like you’re assuming that if we make lifetimes optional we would for some reason also lose the type system. That was not my assumption at all. I assumed the programmer would still pick their own types. With that in mind, if this theoretical compiler could prove memory safety using the developer-provided types and the inferred ownership, why would it still fail a lot?

                                                                                                              where the consequence of failure is performance loss due to not being able to do an optimization

                                                                                                      That’s totally understandable. I assume that, like any compiler, it would eventually get better at this. I also assume lifetimes become an optional piece of the program as well. Assuming this compiler existed, it seems reasonable to me that it could accept and prove lifetimes provided by the developer along with inferring and proving on its own.

                                                                                                              1. 3

                                                                                                        Assuming this compiler existed, it seems reasonable to me that it could accept and prove lifetimes provided by the developer along with inferring and proving on its own.

                                                                                                                That’s what Rust does. And many improvements to Rust focus on increasing the number of lifetime patterns the compiler can recognize and handle automatically.

                                                                                                                You don’t have to annotate everything for the compiler. You write code in patterns the compiler understands, and annotate things it doesn’t. So Rust has gotten easier and easier to write as the compiler gets smarter and smarter. It requires fewer and fewer annotations / unsafe blocks / etc as the compiler authors discover how to prove and compile more things safely.

                                                                                                            2. 4

                                                                                                  Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem to be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used; why can’t it determine for us how to borrow it and who owns it?

                                                                                                              I wondered this at first, but inferring the lifetimes (among other issues) has some funky consequences w.r.t. encapsulation. Typically we expect a call to a function to continue to compile as long as the function signature remains unchanged, but if we infer the lifetimes instead of making them an explicit part of the signature, subtle changes to a function’s implementation can lead to new lifetime restrictions being inferred, which will compile fine for you but invisibly break all of your downstream callers.

                                                                                                              When the lifetimes are an explicit part of the function signature, the compiler stops you from compiling until you either fix your implementation to conform to your public lifetime contract, or change your declared lifetimes (and, presumably, since you’ve been made conscious of the breakage in this scenario, notify your downstream and bump your semver).

                                                                                                  It’s basically the same reason that you don’t want to infer the types of function arguments from how they’re used inside a function – making it easy for you to invisibly break your contract with the outside world is bad.
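
                                                                                                  A sketch of that encapsulation point (the function here is made up for illustration): because the lifetimes are declared, a body change that started borrowing from the other argument would fail to compile at the definition, instead of silently invalidating callers like the one below.

                                                                                                  ```rust
                                                                                                  // Declared contract: the return value borrows from `a` only.
                                                                                                  fn pick<'a>(a: &'a str, _b: &str) -> &'a str {
                                                                                                      // If this body changed to sometimes return `_b`, the function itself would
                                                                                                      // stop compiling here, rather than new borrows quietly appearing in every
                                                                                                      // caller (which is what inferring lifetimes from the body would mean).
                                                                                                      a
                                                                                                  }

                                                                                                  fn main() {
                                                                                                      let a = String::from("kept");
                                                                                                      let result;
                                                                                                      {
                                                                                                          let b = String::from("temporary");
                                                                                                          result = pick(&a, &b);
                                                                                                      } // `b` is dropped here; fine, since the contract says `result` doesn't borrow it.
                                                                                                      println!("{}", result);
                                                                                                  }
                                                                                                  ```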

                                                                                                              1. 3

                                                                                                                I think this is the most important point here. Types are contracts, and contracts can specify far more than just int vs string. Complexity, linearity, parametricity, side-effects, etc. are all a part of the contract and the more of it we can get the compiler to enforce the better.

                                                                                                        2. 1

                                                                                                Which is fine, until you have time or memory constraints that are not easily met by the tracing GC, which is the case for all software of sufficient scale or complexity. At that point, you end up with half-assed and painful-to-debug/optimize manual memory management in the form of pools, etc.

                                                                                                          1. 1

                                                                                                            Or there’s the rust approach, where I write a little proof that I’m done with the memory, and then the computer verifies my proof, or rejects my program. Proof verification is also something computers are good at. Nice.

                                                                                                            Oh I wish that were how Rust worked. But it isn’t. A variant of Rust where you could actually prove things about your programme would be wonderful. Unfortunately, in Rust, you instead just have ‘unsafe’, which means ‘trust me’.

                                                                                                          1. 2

                                                                                                            I use mu4e in Emacs. I used to use Gnus, but it’s slow and Emacs’s lack of concurrency causes it to freeze the UI when working. Mu4e offloads querying, etc. to a separate commandline tool, so Emacs remains responsive.

                                                                                                            1. 1

                                                                                                              Mu4e for me as well. I wrote something about my setup a few years ago, and it’s still more or less the same.

                                                                                                              1. 4

                                                                                                                what projects are built on libssh?

                                                                                                                1. 4

                                                                                                                  GitHub uses it but says they do so in a way that didn’t expose this vulnerability. There’s some discussion in an Ars Technica article that suggests using libssh in server mode (vs. the client-side library) is uncommon.