1. 1

    Designing languages is a vanity activity, no research involved (although lots of claims about power and usability get made, supported by ego and bluster).

    1. 2

      You’re right, of course. At least, more than half right in my personal estimation, and I’ve spent some time wandering in the PL research mines. The places where you’re wrong are the really interesting places, but they are small islands in a blustery sea of dubious, overstated, generally uncontested claims. Bad money drives out good, as they say. Or, bathwater drives out babies? Something like that.

      But your definition of “research” is so narrow that it’s at odds with standard usage, sorry to say. Science is just very political and ego driven, and “computer science” is barely even science. Industry has good reasons to mostly ignore it.

      1. 1

        I think that slightly less than all would be more accurate.

        I’m interested to know what you think the islands are. Always on the lookout for real research.

        1. 1

          Ah. Well, I can’t really promise that my own idea of “interesting” PL research is at all equivalent to your idea of “real” PL research, but here’s a wee handful of influential ideas, off the top of my head:

          • structured and procedural programming (FORTRAN, ALGOL, Pascal… etc)
          • “objects” (Simula, Smalltalk, a few commercially successful languages you may have heard of)
          • actor semantics for parallel computation (Act 1, CSP, Pi-calculus, Erlang…)
          • logic programming
          • algebraic data types
          • Hindley-Milner type checking

          … I think I’d better stop before I get carried away. All began as academic projects, just like basically all the underlying ideas in our immature little field that didn’t get inherited from our neglectful parents, electrical engineering and mathematical logic. Academia may innovate very little relative to all the hot air it produces, but industry simply does not innovate, as a rule. (It sometimes refines, though, and that’s valuable. Also valuable as a great proving ground, although there are some deep problems with that too; for example, like Kuhn observed, epistemic change happens on generational time scales.)
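
          To make one of those bullets concrete: algebraic data types let a program state exactly which shapes a value can take, and the compiler checks that every shape is handled. Here is a rough sketch in Python (my own illustration with made-up Circle/Rect types; the idea comes from the ML family, where Hindley-Milner inference also deduces all the types without annotations):

              from dataclasses import dataclass
              from typing import Union

              @dataclass
              class Circle:
                  radius: float

              @dataclass
              class Rect:
                  width: float
                  height: float

              # A sum type: a Shape is either a Circle or a Rect, nothing else.
              Shape = Union[Circle, Rect]

              def area(s: Shape) -> float:
                  # Structural pattern matching (Python 3.10+) dispatches on the
                  # variant; a type checker can warn about unhandled variants.
                  match s:
                      case Circle(radius=r):
                          return 3.141592653589793 * r * r
                      case Rect(width=w, height=h):
                          return w * h

              print(area(Circle(1.0)))  # ~3.14159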

          1. 1

            These are interesting ways to think about programming.

            Is interesting their only benefit? If half of them never happened, how much difference would it make in practice?

      2. 1

        Much has been gained from the study of PL design; the most impactful contribution is the notion of Safety and Soundness. There is much more, but I won’t enumerate it all.

        Are you saying that all the languages that need to be, already exist? Would you want to dissuade someone from designing a new language?

        1. 1

          There has been very little meaningful study of PL design, just lots of proof by ego and bluster.

          Yes, it is possible to prove stuff about suitably restrictive languages. The only impact of this research appears to have been advancing the careers of those involved.

          If somebody wants to invent a new language, that is their business. But let’s not pretend it’s anything other than a vanity activity, unless they do experimental research to back up any claims of usability, readability, maintainability, etc.

      1. 1

        The paper does not come with any data to download :-(

        “However, beyond the observation that the three variables are positively associated, the strength of the associations and their precise relationships are a matter of open debate and controversy in the research community.”

        One solution is for the test community to learn how to do statistical analysis that is more powerful than the t-test. I don’t mean to pick on you; the same can be said of virtually all researchers in software engineering.

        1. 1

          We will have the data available soon. Will keep you updated.

          1. 1

            Actual link: http://www.coding-guidelines.com/c89.tgz

            Also:

            Unless you happen to have an AT&T 3b2 and know which options to give nroff, you are very unlikely to be able to generate something that looks like C89.

          1. 5

            In Java most methods are short, and 60% of Java code appears in methods containing five or fewer lines. See figure 7.22 in http://knosof.co.uk/ESEUR/ESEUR-draft.pdf

            It is not surprising that most reported faults occur in short Java methods.

              1. 1

                What takeaways do you have from this paper that proquints could use to improve?

                I only skimmed the paper, but it seems largely focused on naming variables, rather than providing mnemonics for numbers.

                1. 1

                  There are a variety of books on improving recall of information.

                  The basic technique is to associate the information to be learned with information that is already stored in long-term memory, e.g., the person who learned to memorize long sequences of numbers by using his knowledge of record-breaking running times.

                  1. 5

                    This isn’t about teaching people how to memorize long strings of digits.
                    It’s about how to replace the strings of digits with something more intrinsically mnemonic.
                    It’s also not about designing memorable names from scratch, which appears to be the topic of the paper you linked.
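
                    For the curious, here is a minimal sketch of that replacement in Python (my own illustration, following the published proquint spec rather than any code from the proposal): each 16-bit word becomes one pronounceable five-letter group.

                        CONSONANTS = "bdfghjklmnprstvz"  # 16 consonants carry 4 bits each
                        VOWELS = "aiou"                  # 4 vowels carry 2 bits each

                        def quint(word):
                            """Encode one 16-bit word as a con-vowel-con-vowel-con group."""
                            return (CONSONANTS[(word >> 12) & 0xF] +
                                    VOWELS[(word >> 10) & 0x3] +
                                    CONSONANTS[(word >> 6) & 0xF] +
                                    VOWELS[(word >> 4) & 0x3] +
                                    CONSONANTS[word & 0xF])

                        def proquint32(n):
                            return quint(n >> 16) + "-" + quint(n & 0xFFFF)

                        print(proquint32(0x7F000001))  # 127.0.0.1 -> "lusab-babad"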

                1. 11

                  I generally dispute all claims of the sort ‘we should not fund A because the money would be better spent on B’ on the grounds that it is usually possible to do both. The reason we haven’t solved poverty, climate change or eradicable infections is because we simply don’t give those any priority, not because we don’t have the money, and certainly not because of the space program or the LHC.

                  Having said that, I am not suggesting that everything should be funded and nothing is a waste of money, and I do think that discussion is an important one to have with regard to the new collider. I am leaning towards going ahead with it personally but I am far from an expert.

                  A more pertinent waste of money in my opinion is the majority of military expenditure, and also inefficiencies and corruption in the financial markets.

                  1. 4

                    Except that what gets (emergency) funding is very often dirty industries. We don’t need to research climate change as much as we need to put into action what has already been learnt about it (*). Is this thing “expensive”? Quite probably. Is it more expensive than this year’s relief packages for oil-based companies? Nope.

                    (*) actually we might need to do more research on the topic in the future but that’s because we still haven’t put into action anything we’ve learnt about it.

                    PS: don’t forget that social change brings greater climate improvements than anything else, because then the poorest in developing countries don’t need to burn down forests in order to be able to eat.

                    1. 2

                      A few years ago I was talking to a professor at UT Austin. He pointed out that the Superconducting Supercollider, before it had been cancelled, had built two of the enormous superconducting electromagnets that were meant to deflect particles around the ring. Each one of these cost the US government more than the total amount of US government funding for computer science ever. Physics is important but it receives an insanely disproportionate amount of research funding.

                      1. 10

                        That’s not really surprising though, IMHO. Advances in computer science often yield profitable work very quickly, so there’s an incentive for private industry to pick up funding soon after the government has gotten something off the ground. Physics, especially blue-sky physics research, often takes decades to yield “profitable” science that the private sector is willing to fund…if ever.

                        The government should fund things that are important but not profitable. The government’s job isn’t to make money, but to promote the general welfare (and, IMNSHO, knowing the ultimate nature of reality is important to the general welfare…).

                        1. 3

                          This wasn’t always the case, but then physics research became necessary for making nuclear weapons and the rockets to deliver them. Were it not for that then physics would probably get about as much research funding as any other science.

                          1. 3

                            I agree - I believe pure and applied physics was in the right place at the right time to get all the funding it needed during the Cold War.

                            “Well sir, I hear the Russians are working on this project too… it would be a shame if they finished it first.”

                          2. 1

                            Part of this is just how insanely expensive it is to do experimental work. The laser I use in my lab cost ~$50k to build from scratch. Buying a turnkey version, like a national lab would, costs ~$500k. Some groups that do my kind of research use fast CCD cameras, which cost tens of thousands of dollars. You may also need a fast oscilloscope, which can run $30k (I’ve seen $100k-$200k ones, but have never needed one). I do biophysics research, so I’m not even doing something incredibly high tech apart from the laser I use.

                            For theoretical physics you really just need to pay salaries and for time on a cluster. I would guess that computer science research is similar.

                            1. 1

                              It’s also expensive to do some bits of computer science research. If you’re doing computer architecture research, taping out a chip is a good $30m, minimum. Research councils don’t fund anything like this, so only a couple of departments in the world ever do it. Even doing systems research, to do it well you need to employ a handful of research software engineers, but grant funding doesn’t let you do this at anything close to an industry-competitive salary.

                              As a result of this underfunding, blue-skies computer science research in a number of core areas is done almost entirely by a handful of industrial research labs.

                              For the CHERI project, the total cost of the research so far is over $20m and we haven’t even taped out a chip. By the time we get to the end of the Digital Security by Design programme, the total spending between government and industrial funding will be over $300m. That is incredibly rare for a computer science research programme; there are hundreds of ideas that deserve the same level of investment that no one is funding and industrial research is not touching because it wouldn’t lead to a competitive advantage.

                              1. 1

                                That’s a good point, I wasn’t thinking about those kinds of expenses at all.

                        1. 2

                          It’s not clear from the webpage why/how this is a breakthrough. Please elucidate.

                          1. 1

                            Inverting the Laplace transform has been central to many applications (and a persistent problem) for a long time. They develop a method that is not just fast, but superior to all known ones in all respects. The 2019 paper linked in the article is open access and does a very good job of explaining the problem, their solution, and the metrics for comparison (the demo at the end of the page already gives a good hint).
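
                            For background (standard textbook material, not something specific to the linked paper): inverting means recovering f(t) from its transform F(s) via the Bromwich contour integral

                                f(t) = \frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} e^{st} F(s) \, ds

                            and evaluating that integral numerically from limited-precision samples of F is famously ill-conditioned, which is why a method that is both fast and uniformly accurate would be a big deal.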

                            1. 2

                              I understand the problem and its importance. But the paper seems to be proposing another solution that is better in some cases than the alternatives.

                              I would not call it a breakthrough.

                          1. 2

                            The paper makes a variety of claims about the distribution of the data, which I thought were unlikely to hold; that made me suspicious of the results.

                            The author sent me a copy of the data (I emailed him asking), but I was never able to do anything of interest with it.

                            1. 2

                              Grey text set on a black background does not make for a very readable webpage.

                              I am jumping to the conclusion that this is a vanity site, and judging the content accordingly.

                              1. 8

                                Regarding the legibility: grey on black can’t be pleasant to read, indeed. I don’t think that style is the author’s intention, though: the web page renders as near-black text on a mostly-white background for me.

                                Regarding the contents: I can tell you that the essay is perhaps the best I have read in months, a rich vein of gems of insight. It’s worth your time even if you have to copy-paste it into a word processor first.

                                1. 5

                                  Install the Tranquility Firefox extension.

                                  Many sites become vastly better as some poor sod’s CSS, which they slaved away at for weeks, just gets junked and replaced by something simple and soothing.

                                  1. 2

                                    Thanks. Giving it a try.

                                    It certainly solved my grey on black problem.

                                  2. 4

                                    The page was generally readable for me in Firefox on Windows. I recommend trying reader mode with your preferred color scheme; the page seems to be properly displayed that way, and I did not spot any missing content.

                                    1. 3

                                      The background is white for me.

                                    1. 0

                                      What was the author smoking when he wrote this?

                                      1. 2

                                        I didn’t spend a lot of time looking but most of the authors are at Google, nVidia, or one of the US government national labs. All three groups are in a position to rewrite or refactor as much of their code as they wish.

                                        • National Labs (6), e.g. Argonne, Livermore, Oak Ridge, Sandia
                                          It wouldn’t surprise me if their simulation software is refactored or rewritten for each supercomputer.
                                        • Google (7)
                                          “Most software at Google gets rewritten every few years.” Software Engineering at Google
                                        • NVidia (2)
                                          GPU libraries and drivers are probably rewritten or refactored for each new generation of hardware.
                                        • Uber (1)
                                        • Unknown (1)
                                        1. 2

                                          Probably splendid isolation from anyone else at Google.

                                          1. 1

                                            Which aspects are umm pipe dreams to you?

                                          1. 35

                                            I have very mixed feelings about this article. Some parts I agree with:

                                            It’s you, the software engineering community, that is responsible for tools like C++ that look as if they were designed for shooting yourself in the foot.

                                            There is very little impetus to build tools that are tolerant of non-expert programmers (golang being maybe the most famous semi-recent counterexample) without devolving entirely into simple toys (say, Scratch).

                                            Some of you have helped with a first round of code cleanup, which I think is the most constructive attitude you can adopt in the short term. But this is not a sustainable approach for the future.

                                            […] always keeping in mind that scientists are not software engineers, and have neither the time nor the motivation to become software engineers.

                                            Yep, software engineers pitching in to clean up academic messes after the fact definitely doesn’t work. One of the issues I’ve run into when doing this is that you can totally screw up a refactor in ways that aren’t immediately obvious. Further, honestly, a lot of “best practices” can really hamper the exploratory fluidity required to do research spikes and feel out a problem.

                                            But then, there’s a lot of disagreement I have too:

                                            The scientists who wrote this horrible code most probably had no training in software engineering, and no funding to hire software engineers.

                                            We expect people doing serious science to have a basic grasp of mathematics and statistics. When they don’t, we make fun of them (that is, when the peer review system works properly). If you’re doing computational models, you damned well should understand how to use your tools properly. No experimental physicist worth a damn that I’ve known couldn’t solder decently well; nobody doing science that relies on computers should be exempt from knowing how to program competently and safely.

                                            clear message saying “Unless you are willing to train for many years to become a software engineer yourself, this tool is not for you.”

                                            Where’s the clear messaging in the academic papers saying “Yo, this is something that I can only reproduce on my Cray Roflcluster with the UT Stampede fork of Python 1.337”? Where’re the warnings “Our university PR department once again misrepresented our research in order to keep sucking at the teat of NSF and donors, please don’t discuss this incredibly subtle work you’re probably gonna misrepresent.” Where’s the disclaimer for “This source code was started 40 years ago in F77 and lugged around by the PI, who is now tenured and doesn’t bother to explain things to his lab anymore because they’re smart and should just get it, and it has been manhandled badly by generations of students who have been under constant pressure to publish results they can’t reproduce using techniques they don’t understand on code they don’t have the freedom to change.”?

                                            The core of that research is building and applying the model implemented by the code; the code itself is merely a means to this end.

                                            This callous disregard for the artifact that other people will use is alarming. Most folks aren’t going to look at your paper with PDEs and sagely scratch their chins and make policy decisions–they’re going to run your models and try to do something with the results. I don’t think it is reasonable to disavow responsibility for how the work is going to be used in the future if you also rely on tax dollars and/or bloated student tuition to fund your adventures.

                                            There’s something deeply wrong with academic research and computing, and this submission just struck me as an attempt to divert attention away from it by harnessing the techlash.

                                            1. 18

                                              I’m someone who’s done their (extremely) fair share of programming work in academia, but outside a CS department: I can guarantee that anyone insisting that the solution was simple and that it’s just “they should have hired real software engineers” has had zero exposure to “real software engineers” trying to write simulation software. Or if they had, it was either in exceptional circumstances, or they didn’t actually pay attention to what happens there.

                                              (This is no different to CS, by the way. The reason why you can’t just hire software engineers and expect they’ll be able to understand magnetohydrodynamics (or epidemiology, or whatever else) is the same reason why you can’t just hire electrical engineers or mechanical engineers and expect them to write a Redis clone worth a damn in less than two years – let alone something better.)

                                              As Dijkstra once remarked, the easiest machine applications are the technical/scientific computations. The programming behind a surprising proportion of simulation software is trivial. By the time they’re done with their freshman year, all CS students know enough programming to write a pretty convincing and useful SPICE clone, for example. (Edit: just to be clear, I’m not talking out of my ass here. For two years I’ve picked the short straw and ended up herding CS first-years through it, and I know from experience that two first-year students can code a basic SPICE clone in a week, most of which is spent on the parser). I haven’t read it in detail but from glossing over it, I think none of the techniques, data structures, algorithms and tools significantly exceed a modest second-year CS/Comp Eng curriculum.
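
                                              To make “the programming is trivial” concrete, here is the computational core of such a simulator for the DC case, as a toy sketch of my own (resistors and current sources only; solve_dc and its conventions are invented for illustration): stamp each element into a nodal conductance matrix G and solve G v = i for the node voltages.

                                                  import numpy as np

                                                  def solve_dc(num_nodes, resistors, sources):
                                                      """Toy nodal analysis: node 0 is ground; returns voltages at nodes 1..num_nodes."""
                                                      G = np.zeros((num_nodes, num_nodes))  # conductance matrix, ground row/col dropped
                                                      i = np.zeros(num_nodes)               # net current injected into each node
                                                      for a, b, ohms in resistors:          # resistor between nodes a and b
                                                          g = 1.0 / ohms
                                                          for n in (a, b):
                                                              if n:                         # skip the ground node
                                                                  G[n - 1, n - 1] += g
                                                          if a and b:
                                                              G[a - 1, b - 1] -= g
                                                              G[b - 1, a - 1] -= g
                                                      for a, b, amps in sources:            # current source driving amps from a into b
                                                          if a:
                                                              i[a - 1] -= amps
                                                          if b:
                                                              i[b - 1] += amps
                                                      return np.linalg.solve(G, i)

                                                  # Two parallel 2-ohm resistors from node 1 to ground, 1 A injected: V1 = 1.0 V
                                                  print(solve_dc(1, [(1, 0, 2.0), (1, 0, 2.0)], [(0, 1, 1.0)]))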

                                              Trouble is, most of the domain-specific knowledge required to understand and implement these models far exceeds a CS/Comp Eng curriculum. You think epidemiologists who learned C++ on their own and coded by themselves for 10 years write bad simulation code? Wait ’til you see what software engineers who have had zero exposure to epidemiology can come up with.

                                              “Just enough Python” to write a simple MHD flow simulator is something you can learn in a few afternoons. Just enough electromagnetism to understand how to do that is a four-semester course, and the number of people who can teach themselves how to do that is very low. I know a few and I know for a fact that most companies, let alone public universities, can’t afford their services.

                                              This isn’t some scholastic exercise. No one hands you a two-page description of an algorithm for simulating how the flu spreads and says hey, can you please turn this mess of pseudocode into C++, I’m not that good at C++ myself. The luckiest case – which is how most commercial-grade simulation software gets written – is that you get an annotated paper and a Matlab implementation from whoever developed the model.

                                              (Edit: if you’re lucky, and you’re not always lucky, that person is not an asshole. But if you think translating Matlab into C++ isn’t fun, wait until you have to translate 4,000 lines of uncommented Matlab from someone who doesn’t like talking to software engineers because they’re not real engineers).

                                              However, by the time that happens, the innovation has already happened (i.e. the model has been developed) months before, sometimes years. If you are expected to produce original results – i.e. if you do research – you don’t get a paper by someone else and a Matlab implementation. You get a stack of 80 or so (to begin with) papers on – I’m guessing, in this case, epidemiology, biochemistry, stochastic processes and public health policies – and you’re expected to come up with something better out of them (and, of course, write the code). Yeah, I’m basically describing how you get a PhD.

                                              1. 7

                                                I can guarantee that anyone insisting that the solution was simple and that it’s just “they should have hired real software engineers” has had zero exposure to “real software engineers” trying to write simulation software.

                                                I totally agree with this. That’s also why my argument is “researchers need to learn to write better code” and not “we should hire software engineers to build their code for them”.

                                              2. 13

                                                …no funding to hire software engineers.

                                                Speaking as a grant-funded software engineer working in an academic research lab, it’s amazing what you can get money for if your PI cares about it and actually writes it into grant applications.

                                                My suspicion, and I have zero tangible evidence for this, just a handful of anecdotal experiences, is that labs outside of computer science are hesitant to hire software engineers. It’s better for the PI’s career to bring in a couple more post-docs or PhD students and expect them to magically become software engineers than to hire a “real” one.

                                                Another interesting problem, at least where I work, is that the pay scale for “software engineer” is below market. I’m some kind of “scientist” on paper because that was the only way they could pay the position enough to attract someone out of industry.

                                                1. 5

                                                  Speaking as a grant-funded software engineer working in an academic research lab, it’s amazing what you can get money for if your PI cares about it and actually writes it into grant applications.

                                                  Oh, totally agree. I’ve made rent a few times by being a consulting software engineer, and it’s always been a pleasure to work with those PIs. Unfortunately, a lot of PIs just frankly seem to have priorities elsewhere.

                                                  I’ve heard also that in the US there’s less of a tradition around that, whereas European institutions are better about it. Am unsure about this though.

                                                  Also, how to write code that can survive the introduction of tired grad students or energetic undergrads deserves its own consideration.

                                                  1. 6

                                                    Yeah, “Research Software Engineering” is a pretty big thing in the UK at least… https://society-rse.org.

                                                    1. 11

                                                      It is (I’m an RSE in Oxford). It costs as much within bizarre University economic rituals for a researcher to put (the equivalent of) one of us (full time, but what they usually get is that time shared across a team of people with various software engineering skills and experiences) on a project as it would to hire a postdoc research assistant, and sometimes less. Of course they only do that if they know that they have a problem we can help with, and that we exist.

                                                      Our problems at the moment are mostly that people are finding out about us faster than we’re growing our capability to help them. I was on a call today for a project that we couldn’t start before January at the earliest, which is often OK in the usual run of research funding rounds, less OK for spin-out and other commercial projects. We have broken the emergency glass for scheduling Covid-19 related projects by preempting other work: I’ve been on one since March, and another was literally a code review and improvement plan like the one the linked project got after it was shared. We run about 3 surgery sessions a week helping researchers understand where to take their software projects; again, that only lands with people who know to ask. But if we told more people they could ask, we’d be swamped.

                                                      While we’re all wildly in agreement that this project got a lot of unfair context-free hate from the webshits who would gladly disrupt epidemiology, it’s almost certainly the case that a bunch of astrophysicists somewhere are glad the programming community is looking the other way for a bit.

                                                      1. 3

                                                        I’m an RSE in Oxford

                                                        A lot of UK universities don’t have an RSE career track (I’ve been helping work to get one created at Cambridge). It’s quite difficult to bootstrap. Most academics are funded out of grants. The small subset with tenure are funded by the department taking a cut of all grants to maintain a buffer for when they’re not funded on specific ones. Postdocs are all on fixed-term contracts. This is just about okay if you regard a postdoc as something like an extended internship that should lead to a (tenured) faculty position, but increasingly it’s treated as a long-term career path. RSE, in contrast, does not even have the pretence that it’s a stepping stone to a faculty job. A sustainable RSE position needs a career path, which means you need a mechanism for funding a pool of RSEs between grants (note: universities often have this for lab technicians).

                                                        The secondary problem is the salary. We (Microsoft Research Cambridge) pay starting RSEs (straight out of university) more than the UK academic salary scale pays experienced postdocs or lecturers[1]. RSEs generally expect to earn a salary that is comparable to a software engineer and that’s very hard in a university setting where the head of department will be paid less than an experienced software engineer. The last academic project I was on had a few software engineers being paid as part-time postdocs, so that they had time for consulting in the remaining time (a few others we got as contractors, but that was via DARPA money that is a bit more flexible).

                                                        The composition of these two is a killer. You need people who are paid more than most academics, who you are paying out of a central pool that’s covered by overhead. You can pay them much less than an industry salary but then you can’t hire experienced ones and you get a lot of turnover.

                                                        [1] Note for Americans: Lecturer in British academia is equivalent to somewhere between assistant and associate professor: tenured, but junior.

                                                        1. 2

                                                          Postdocs are all on fixed-term contracts.

                                                          Happy to talk more: what we’ve done is set up a Service Research Facility, which is basically a budget code that researchers can charge grant money against. So they “put a postdoc” on their grant application, then give us the money and get that many FTEs of our time. It also means that we can easily take on commercial consultancy, because you multiply the day rate by the full economic cost factor and charge that to the SRF. A downside is that we have to demonstrate that the SRF is committed to N*FTE salaries at the beginning of each budget year to get our salaries covered by the paymasters (in our case, the CS department), making it harder to be flexible about allocation and side work like software surgeries and teaching. On the plus side, it gives us a way to demonstrate the value of having RSEs while we work to put those longer-term streams in place.

                                                          The secondary problem is the salary […] so that they had time for consulting

                                                          You’re not wrong :). I started by topping mine up with external commercial consultancy (I’ve been in software engineering much longer than I’ve been in RSE), but managed to get up to a senior postdoc grade so that became unnecessary. I’m still on half what I’ve made elsewhere, of course, but it’s a livable salary.

                                                          Universities and adjacent institutions (Diamond Light Source, UKAEA, Met Office/ECMWF all pay more but not “competitive” more) aren’t going to soon be comparable to randomly-selected public companies or VC funded startups in terms of “the package”, and in fact I’d hate to think what changes would be made in the current political climate to achieve that goal. That means being an RSE has to have non-monetary incentives that being a FAANG doesn’t give: I’m here for the intellectual stimulation, not for the most dollars per semicolon.

                                                          A sustainable RSE position needs a career path, which means you need a mechanism for funding a pool of RSEs between grants (note: universities often have this for lab technicians).

                                                          I’m starting a DPhil (same meaning as PhD, different wording because Oxford) on exactly this topic in October: eliciting the value of RSEs and providing context for hiring, training, evaluating and progressing RSEs. I’ve found in conversations and panel discussions at venues like the RSE conference that some people have a “snobbish” attitude to the comparison with technicians, BTW. I’m not saying it’s accurate or fair, but they see making the software for research as a more academically-valid pursuit than running the machines for research.

                                                          1. 2

                                                            Thanks, that’s very informative. Let me know if you’re in Cambridge (and when pubs are allowed to open again) - I’ll introduce you to some of our RSEs.

                                                        2. 2

                                                          Seeing as you seem to have experience in the field, from a very high-level view, do the complaints about this project seem valid or not? I understand that one could only make an educated guess considering this is 15K lines, hotly debated, and also a developing situation (the politics… Whoo boy!), but I would love to have someone with experience calibrate the needle on the outrage-o-meter somewhat.

                                                          1. 1

                                                            I haven’t examined the code, which is perhaps a lesson in itself.

                                                            1. 1

                                                              As a baseline I put the code through clang’s scan-build and it found 8 code flows where uninitialized variables may affect the model early in the run. It’s possible that not all of them can realistically be triggered (it doesn’t know all the dependencies between pieces of external data), but it’s not a great sign.

                                                              Among other things, that’s a reasonable explanation for why people report that even with well-defined random seeds they see different results. And I wouldn’t count “uninitialized variables” in the class of uniform randomness, so I’d be wary about just averaging it out.

                                                          2. 2

                                                            If you cannot pay somebody much, give them a fancy title, e.g., “Research Software Engineering”. It’s purely an HR ploy.

                                                      2. 6

                                                        It’s you, the software engineering community, that is responsible for tools like C++ that look as if they were designed for shooting yourself in the foot.

                                                        There is very little impetus to build tools that are tolerant of non-expert programmers (golang being maybe the most famous semi-recent counterexample) without devolving entirely into simple toys (say, Scratch).

                                                        I actually agree with the author on this.

                                                        Let’s not even pretend that the only alternative to the absolutely mind-boggling engineering and design shit show that is C++ is “devolving entirely into simple toys”.

                                                        1. 1

                                                          Rust?

                                                          1. 1

                                                            One option.

                                                        2. 4

                                                          I think you put it very well. Look: if there’s a hierarchy of importance I’m happy to put science far ahead of software development. But the fact remains: when it comes to producing scientific results using software, software developers do know a thing or two about how hard it is to fool yourself, and we are rightly horrified when someone handwaves away a lack of tests and input validation with “a non-programmer expert will look at this code and make sure not to hold it wrong”.

                                                          I guess in that sense it’s not much different from the rampant misuse of statistics in science; it’s just that software misuse might currently be flying a little below the radar.

                                                          1. 4

                                                            Exactly. It is the job of the researcher to be aware of the limitations of his own ability to implement his model with a particular tool. To implement something badly and then make the grandiose claim that the results of said badly implemented model should inform decisions that affect millions is his own fault.

                                                            You can’t blame a screwdriver ‘community’ if you use one badly and poke yourself in the eye. Not even the lack of a “do not poke eye with screwdriver” warning label counts as a failure.

                                                            1. 1

                                                              This plays out in an interesting way at Google’s Research division. Whatever else you might think about the company, Google software engineers (SWEs) are generally pretty decent. Many of them are interested in ML research projects because they’re all the rage these days. The research teams, of course, just want to do research. But they can get professional SWEs to build their tools for them by letting them feel like they’re part of cutting edge research. So they end up with a mix of early-career SWEs building tools that aren’t inherently all that interesting or challenging but get used to do very interesting and impactful research work and a few more experienced SWEs who want to make the transition into doing research.

                                                            1. 6

                                                              Some of the claimed code reviews do appear to be all about point scoring.

                                                              Writing code has a low status in academia and the people doing it are essentially recent graduates, so most of the (scientific) code is awful. The most worrying aspect is the lack of tests.

                                                              My take on the “all source in one file” issue.

                                                              1. 6

                                                                It’s even worse than that, in my experience. Many scientific coders haven’t even graduated at all, and are working under unhealthy levels of pressure in environments where they have very little autonomy or professional standing. Few have any training in even the rudimentary software engineering practices that CS students are drilled in, and we know how inadequate those can be.

                                                                It’s pretty bad out there. Egotism and political machinations are woven through the culture of science. Poor engineering certainly increases the risk of bad results. Bad results increase the risk of bad policy. Political bias (internal, or otherwise) often drives funding in competitive fields, incentivizing sloppy practices that increase publication velocity and further the policy agendas of (often decidedly non-disinterested) funders, closing the loop on a vicious cycle.

                                                                Science is hard enough, even in boring disciplines that don’t attract much outside attention. Software is also hard enough already. We should be able to work together in good faith to improve the quality of research, but when we can’t, there are some pretty deep structural problems that become a little more visible.

                                                              1. 2

                                                                ALGOL 60 Implementation by Randell and Russell is a fantastic introduction to writing an Algol compiler. The techniques described became the standard way of doing things for a decade or two, with many still in use.

                                                                Lots of other interesting stuff at softwarepreservation.org

                                                                1. -2

                                                                  Language design is a vanity project. Why would anybody fund somebody else’s vanity project when they could use the money to fund their own vanity project?

                                                                  If somebody enjoys designing and implementing their own language, then good for them for being able to do something they are passionate about.

                                                                  The vanity (or strategic marketing, depending on your point of view) development of corporate languages is invariably going to be better funded than individual or group projects.

                                                                  The world does not owe anybody a living. Be thankful for having the resources to spend time on a vanity project.

                                                                  1. 18

                                                                    Why would anybody fund somebody else’s vanity project when they could use the money to fund their own vanity project?

                                                                    Because you find value in it. The same reason people pay subscriptions to Netflix or their favorite YouTuber, or subscribe to the Patreons of game modders or anyone else.

                                                                    The world does not owe anybody a living. Be thankful for having the resources to spend time on a vanity project.

                                                                    Where does this sentiment come from? I didn’t read anything about anyone owing anyone anything in the linked post.

                                                                    1. 9

                                                                      Can you define “vanity project” here? It seems you are making a value judgment; the phrase implies that such projects have little value aside from stroking one’s ego. I wonder what has value, in your eyes.

                                                                      Are you saying that because computer languages already exist, there is no value to having new languages?

                                                                      Do humans already communicate perfectly with computers? Do computers perfectly meet humanity’s needs? Are computer programs free of bugs and vulnerabilities? Are all programs fast and efficient, user-friendly, and easy+quick to develop properly? Is there no room for improvements over existing languages that might help address these issues?

                                                                      1. 9

                                                                        For Elm specifically, its designers seem to have very strong opinions on how to do things “right”, to the detriment of users (see e.g. https://dev.to/kspeakman/elm-019-broke-us--khn).

                                                                        A major way to have a software project create a steady income stream is to get companies on board (they’re much less cost-sensitive than individual users), but pulling the rug out from under their feet is a sure way to ensure that this won’t happen.

                                                                        So for Elm specifically, I think “vanity project” is an apt description.

                                                                        1. 3

                                                                          Agreed, and “getting companies on board” doesn’t necessarily mean compromising design decisions like he describes. If people are willing to invest in your alternative language that means that they largely agree with your design principles and values. But it does mean providing the kinds of affordances and guarantees that allow an organization to be in control of their own destiny and engineer a robust and maintainable system. Elm has had almost no energy invested into these concerns.

                                                                        2. 1

                                                                          I see nothing wrong with a project whose purpose is enjoyment; that includes some amount of ego-stroking.

                                                                          Finding out which language features have the greatest amount of some desirable characteristic requires running experiments. I’m all for running experiments to see what is best (however best might be defined).

                                                                          Creating a new language and claiming it has this, that or the other desirable characteristic, when there is no evidence to back up the claims, is proof by ego and bluster (this is a reply to skyfaller’s question, not a statement about the linked-to post; there may be other posts that make claims about Elm).

                                                                          1. 9

                                                                            How would a person establish any evidence regarding a new language without first designing and creating that new language? I agree that evidence for claims is desirable, but your original comment seems to declare all new language design to be vanity (i.e. only good for ego-stroking), and that’s a position that requires evidence as well. Just because a language has not yet proven its value does not mean it has no value. Reserving judgment until you can see some results seems a more prudent tactic than, well, prejudice.

                                                                            1. 0

                                                                              First work out what language features are best, then design the language. There are plenty of existing languages to experiment with.

                                                                              Designing and implementing a language, getting people to learn it and write code in it, and only then running experiments is completely the wrong way of doing things.

                                                                              1. 2

                                                                                How do you work out which features are best if the ones you’re trying don’t exist yet? Wouldn’t that require designing and implementing them and then letting people use them?

                                                                                1. 1

                                                                                  To be able to design/implement a language feature that does not yet exist, somebody would need to review all existing languages to build a catalogue of existing features; or consult such a catalogue, if one already existed.

                                                                                  I don’t know of any such catalogue; pointers welcome.

                                                                                  Do you know of any language designer who did much more than using their existing knowledge of languages?

                                                                                  1. 3

                                                                                    You wouldn’t have to know all existing language features to invent a new approach, and the only way to test a new approach would be to build it and let people use it.

                                                                                    I think I’m lost as to where your argument is headed.

                                                                        3. 5

                                                                          Why would anybody fund somebody else’s vanity project when they could use the money to fun their own vanity project?

                                                                          Because they realise that there’s greater benefit in them having the other project with increased investment than in their own project. The invisible hand directs them to the most efficient use of resources.

                                                                          Because they realise an absolute advantage the other project has in producing a useful outcome, and choose to benefit from that advantage.

                                                                          Because they are altruists who see someone doing something interesting and decide to chip in.

                                                                          Because they aren’t vain.

                                                                        1. 3

                                                                          Yggdrasil

                                                                          Slackware

                                                                          RedHat

                                                                          Suse, OpenSuse, openSUSE Tumbleweed

                                                                          1. 2

                                                                            Quorum is divisive in the programming-language theory and design worlds. While Quorum appears to be usable as a language, it is excessively mediocre and does not advance the state of the art. On the other hand, there’s nothing especially wrong with mediocre languages.

                                                                            As an example of Quorum lagging behind the times, consider the object model. Quorum endows all objects with a very poor selection of methods, focusing on equality and ignoring everything else. Since objects can lie, objects must not be responsible for determining their own hash codes and equality; instead, some external global objects ought to be available for performing equality comparisons. This was known theoretically in the early 90s and by the late 90s, languages like E had demonstrated and fixed this set of flaws.
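
                                                                            A wee sketch of that idea in Python (my own illustration; E’s actual equalizer is richer): equality is decided by a trusted external comparator rather than by the objects themselves.

                                                                                class Equalizer:
                                                                                    """Trusted third party: decides equality itself instead of
                                                                                    asking a possibly-lying object's __eq__."""
                                                                                    def same(self, a, b):
                                                                                        if a is b:                    # identity always agrees
                                                                                            return True
                                                                                        if type(a) is not type(b):
                                                                                            return False
                                                                                        # Inspect fields directly; a real equalizer would recurse
                                                                                        # with this same discipline instead of trusting overrides.
                                                                                        return vars(a) == vars(b)

                                                                                class Liar:
                                                                                    def __init__(self, x):
                                                                                        self.x = x
                                                                                    def __eq__(self, other):          # claims to equal everything
                                                                                        return True

                                                                                eq = Equalizer()
                                                                                print(Liar(1) == "anything")          # True: the object lies
                                                                                print(eq.same(Liar(1), Liar(2)))      # False: fields differ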

                                                                            There is no metaprogramming whatsoever. As a consequence, there are over 120 classes representing individual HTML tags, like this class for <abbr> tags. In Monte, a modern dialect of E, I represented every HTML tag polymorphically and using metaprogramming, in under 100 lines of code. My hands are tired; I do not have the strength to write out hundreds of modules in the Java tradition.
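
                                                                            For contrast, a toy Python version of the metaprogramming approach (my illustration, not the Monte code): one parametric definition covers every tag instead of one handwritten class per tag.

                                                                                HTML_TAGS = ["a", "abbr", "div", "em", "p", "span"]  # ...plus a hundred more

                                                                                def make_tag(name):
                                                                                    def tag(*children, **attrs):
                                                                                        attr_str = "".join(f' {k}="{v}"' for k, v in attrs.items())
                                                                                        return f"<{name}{attr_str}>" + "".join(map(str, children)) + f"</{name}>"
                                                                                    tag.__name__ = name
                                                                                    return tag

                                                                                # One loop replaces 120+ hand-written tag classes.
                                                                                globals().update({name: make_tag(name) for name in HTML_TAGS})

                                                                                print(p("See the ", abbr("HTML", title="HyperText Markup Language"), " spec."))
                                                                                # -> <p>See the <abbr title="HyperText Markup Language">HTML</abbr> spec.</p>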

                                                                            On accessibility, Quorum would be very interesting if it were oriented exclusively around being accessible. Quotes on pages like this one suggest so:

                                                                            When it began, Quorum was used exclusively at schools for the blind or visually impaired.

                                                                            I did not know this before today, despite having examined Quorum multiple times. It is not a well-advertised fact. In the entire list of builtin libraries, I could only find one class that relates to this, and one tutorial lab. I wonder whether Quorum really is designed for the task of being accessible.

                                                                            It is not an accident or coincidence that Quorum has no usage whatsoever on Rosetta Code.

                                                                            1. 2

                                                                              I think I get your point; a new programming language doesn’t need to be cutting-edge, but whatever features it does support should be implemented in a reliable way. I also appreciate why the latter point is especially important in a language for beginners, since they’re (hopefully) trying to lay a strong conceptual foundation, and the language shouldn’t impede that. I haven’t had any involvement in the design of the Quorum language, so I don’t have any more to say on that subject.

                                                                              I think there are good reasons not to over-emphasize the accessibility angle. If Quorum were pigeonholed as the language for blind students, then it wouldn’t spread into mainstream classrooms. While schools for the blind do exist, and some of them use Quorum, most blind students, from elementary school up through university, are in mainstream classrooms. So if these students are to benefit from the accessibility of the Quorum environment and libraries, without being isolated from the rest of their class by using something different than the class as a whole, then Quorum needs to spread into mainstream classes, which it’s already doing IIUC. Now, it would be unreasonable to expect a mainstream teacher to use Quorum for the whole class just for the benefit of a blind student. But the hope, if not yet the reality, is that Quorum will be beneficial for all students.

                                                                              Also, if people perceive Quorum as the language for blind students (or blind programmers in general), then that gives them an overly limited view of what blind people can do. A blind programmer friend of mine expressed his concern about that possibility.

                                                                              As for the fact that Quorum isn’t on Rosetta Code, that merely means that it’s not interesting to language nerds. That doesn’t count for much IMO.

                                                                              1. 1

                                                                                What makes a language mediocre? The fact that it does not align with your views is a purely personal metric.

                                                                                What does “advance the state of the art” mean? A language that has some obscure construct that nobody has thought of yet?

                                                                                Quorum might be divisive because it is trying to use evidence to drive feature selection. I can see that being a problem for people involved in vanity research.

                                                                                1. 1

                                                                                  In terms of teaching curricula, I’d consider “advancing the state of the art” things like Scratch, Pyret, or Hedy. Hedy in particular is really interesting to me as it has multiple syntax levels for introducing ideas gradually.

                                                                                  1. 1

                                                                                    What would advance the state of the art is people running experiments to show which features do what is claimed of them, and which don’t.

                                                                                    Existing claims are based on ego, vanity, and arm-waving (there have been a few inconclusive studies).

                                                                                    1. 1

                                                                                      I believe the inventors of both Pyret (Shriram Krishnamurthi) and Hedy (Felienne Hermans) have done lots of quantitative research on software education.

                                                                                      1. 1

                                                                                        I was not previously aware of the Pyret/Shriram Krishnamurthi work.

                                                                                        A quick scan of a dozen or so of his most recent experimental-looking papers finds they are essentially write-ups of student test results. Not experiments, as such. One was a survey (I treat surveys as fake research).

                                                                                        Can you point me at the good experimental papers?

                                                                                  2. 1

                                                                                    I will proceed along the lines of discussion that I already introduced.

                                                                                    First, there is a basic metric that we can use for language quality, Kolmogorov complexity, with respect to some basket of chosen problems. In short, shorter programs are better. Additionally, we can use Shutt’s insight and compare the first and second derivatives of semantics for languages. The first derivative is like Kolmogorov complexity, while the second derivative suggests that not just shorter programs are valuable, but also shorter patterns for reusing subprograms and abstracting their usage.
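
                                                                                    To make this concrete, here is a toy sketch of the kind of measurement I have in mind. The task basket and file layout are hypothetical, and gzipped size is only a crude stand-in for Kolmogorov complexity:

                                                                                    import gzip
                                                                                    from pathlib import Path

                                                                                    # Crude proxy for the "first derivative": total compressed size of
                                                                                    # idiomatic solutions to a fixed basket of tasks, per language.
                                                                                    # Compressing first means verbose-but-repetitive syntax is penalized
                                                                                    # less than genuine structural complexity.
                                                                                    def solution_size(path: Path) -> int:
                                                                                        return len(gzip.compress(path.read_bytes()))

                                                                                    def basket_score(lang_dir: str, tasks: list[str]) -> int:
                                                                                        return sum(solution_size(Path(lang_dir) / t) for t in tasks)

                                                                                    tasks = ["fizzbuzz", "wordcount", "json_roundtrip"]
                                                                                    for lang in ["quorum", "monte", "python"]:
                                                                                        print(lang, basket_score(f"solutions/{lang}", tasks))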

                                                                                    Putting this together, my critique is simply that Quorum isn’t very good on either metric. My own pet language, Monte, is terrible, and yet it handily surpasses Quorum on both.

                                                                                    To the state of the art, first, note that identity and equality operators have been a deep philosophical topic in our field for a long time. Baker’s paper is from 1990. By 1996 or so, E had added an “equalizer” object which performs identity and equality comparisons, rather than delegating that responsibility to individual classes. To quote from the linked E documentation:

                                                                                    The traditional language specification would state “a’s behavior and b’s behavior must agree” (see for example the specification of “equals” in Java). However, when code is integrated from multiple sources, such specifications are incoherent. If “a” and “b” don’t agree, who’s at fault? In Java, once someone adds the following code to a system,

                                                                                    /**
                                                                                     * A "more equal" class of objects
                                                                                     */
                                                                                    public class OrwellPig {
                                                                                        public boolean equals(Object other) {
                                                                                            return true;
                                                                                        }
                                                                                        public int hashCode() {
                                                                                            return 0;
                                                                                        }
                                                                                    }
                                                                                    

                                                                                    it is equally valid, from the Java spec, to say OrwellPig is buggy as it is to say OrwellPig is correct and the equals/hashCode behavior of all other classes is buggy. In a programming model intended to support the interaction of mutually suspicious code (which Java claims to be), such diffusion of responsibility is unacceptable.
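
                                                                                    For contrast, here is a minimal sketch of what centralizing that responsibility can look like. This is my own illustration in Python, not E’s actual equalizer, and the details are simplified:

                                                                                    # Sketch of an E-style "equalizer" (hypothetical, not E's real code).
                                                                                    # Equality is decided by one privileged routine rather than delegated
                                                                                    # to classes, so an OrwellPig cannot declare itself "more equal".
                                                                                    TRANSPARENT = (int, float, bool, str, bytes)

                                                                                    def equalizer(a, b) -> bool:
                                                                                        if a is b:                       # identity always counts
                                                                                            return True
                                                                                        if type(a) is not type(b):       # no cross-type equality
                                                                                            return False
                                                                                        if isinstance(a, TRANSPARENT):   # selfless data: compare structurally
                                                                                            return a == b
                                                                                        if isinstance(a, tuple):
                                                                                            return len(a) == len(b) and all(map(equalizer, a, b))
                                                                                        # Everything else is stateful: only identity counts, and we
                                                                                        # already know a is not b.
                                                                                        return False

                                                                                    class OrwellPig:
                                                                                        def __eq__(self, other):         # this lie now has no effect
                                                                                            return True

                                                                                    print(equalizer(OrwellPig(), OrwellPig()))  # False
                                                                                    print(equalizer((1, "a"), (1, "a")))        # True

                                                                                    The point is that no class, however creatively written, can redefine what equality means for everybody else.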

                                                                                    This dire bug in the Java object model remains in Quorum’s model, where per-class equality is the only mechanism on offer. Quorum’s oldest papers date to 2003, and its code dates to 2011; they did not need to copy this mistake from Java, but they chose to copy it anyway, from one decade into another.

                                                                                    The state of the art, more generally, has been to understand that objects are not just a ubiquitous pattern, but fully isomorphic to certain other models of programming. Both actors and microservices are isomorphic to a certain style of object-based design where objects do not treat each other as bags of bytes, but as private agents who consent to computation, and where objects do not merely call one another, but send messages to each other. In doing so, we are constantly unwinding our mistakes and removing overly-delegated abilities which ended up being too easy to misuse. For example, Java has low-level object serialization and marshalling, and many languages like Python and Ruby also implemented subsystems like this; however, today, we generally realize that objects need to be designed to be serialized, and also that plain-old-data formats like JSON are preferable to serialized-code formats like pickle.
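
                                                                                    A small sketch of that difference, with a hypothetical class; the point is that the object exposes exactly the view of itself that it consents to, instead of letting a serializer rummage through its internals:

                                                                                    import json

                                                                                    # Hypothetical class illustrating "designed" serialization: the
                                                                                    # object decides what leaves the process; private state stays private.
                                                                                    class Account:
                                                                                        def __init__(self, owner: str, balance: int):
                                                                                            self.owner = owner
                                                                                            self.balance = balance
                                                                                            self._audit_log = []         # deliberately never serialized

                                                                                        def to_json(self) -> str:
                                                                                            return json.dumps({"owner": self.owner, "balance": self.balance})

                                                                                        @classmethod
                                                                                        def from_json(cls, text: str) -> "Account":
                                                                                            data = json.loads(text)
                                                                                            return cls(data["owner"], data["balance"])

                                                                                    a = Account("alice", 100)
                                                                                    b = Account.from_json(a.to_json())   # plain data out, fresh object in
                                                                                    # Compare pickle: pickle.loads() rebuilds arbitrary object graphs and
                                                                                    # can be made to run attacker-chosen code via __reduce__, while
                                                                                    # json.loads() can only ever yield plain data.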

                                                                                    I understand the desire to use evidence. I hope that you can see that there are more kinds of evidence than just randomized controlled tests which examine how well undergraduate students can read code; I am not providing links just so that folks will have reading material, but because I believe that these links point to evidence which supports my position. I have examined Quorum’s evidence. I think that only one of the cited papers actually uses Quorum. In that paper, Quorum is compared against popular and common languages of the day for readability by novices. Quoting this PDF of the paper:

                                                                                    Post-hoc Tukey HSD tests reveal that Quorum was rated as statistically significantly more intuitive than Go (p<0.001), C++ (p<0.001), Perl (p<0.001), Python (p<0.001), Ruby (p<0.024), Smalltalk (p<0.001), PHP (p<0.001), and approached significance with Java (p=.055). The result holds generally for programmers as well, except that there was no statistical difference between C++, Java, and Quorum for these users (an example of the interaction effect). Obviously, this does not mean that Quorum is more intuitive than these languages, but it does mean that novices in our sample certainly perceived it to be.

                                                                                    Yikes. So, with a manual adjustment for Ruby, Quorum gets to soar above the competition to be almost as readable as Java to neophytes. I would accept this study as finding some interesting facts about local maxima in the context of USA culture and language, but not as a deep statement about how our society ought to write code. And meanwhile, Quorum still isn’t very desirable. Would you write code in Quorum?
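
                                                                                    As a postscript, for anyone who wants to poke at this kind of post-hoc analysis themselves: statsmodels can run the same Tukey HSD test. The ratings below are invented, purely to show the shape of the computation, not to reproduce the study:

                                                                                    # Invented data, just to show the shape of a post-hoc Tukey HSD
                                                                                    # analysis like the one quoted above; statsmodels does the real work.
                                                                                    import numpy as np
                                                                                    from statsmodels.stats.multicomp import pairwise_tukeyhsd

                                                                                    rng = np.random.default_rng(0)
                                                                                    ratings = np.concatenate([
                                                                                        rng.normal(7.0, 1.5, 40),   # "Quorum" group
                                                                                        rng.normal(6.5, 1.5, 40),   # "Java" group
                                                                                        rng.normal(4.0, 1.5, 40),   # "Perl" group
                                                                                    ])
                                                                                    groups = np.repeat(["Quorum", "Java", "Perl"], 40)

                                                                                    print(pairwise_tukeyhsd(ratings, groups, alpha=0.05))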

                                                                                1. 8

                                                                                  Have you read Bill Kent’s essay on this? I think you’d really like it.

                                                                                  The choice of syntax is partially due to heritage: F# is based on ML, which is based on math, and JavaScript syntax is based on Java -> C -> Algol -> FORTRAN.

                                                                                  This is incorrect. Algol does not derive from FORTRAN. Additionally, neither Algol nor FORTRAN follows C’s style of equality and assignment: Algol uses := for assignment and = for equality, while FORTRAN uses = and .EQ.. C actually gets its style from BCPL, which got its own style from a deliberate simplification of CPL. I wrote a bit more about this here.

                                                                                  Also, ML has mutable assignments with :=.

                                                                                  1. 3

                                                                                    Thanks for the correction; I’ll update the post.

                                                                                    No, I hadn’t seen that essay; thanks!

                                                                                    Edit: This chart indicates otherwise? It’s a minor point in the article, but I’m interested in the truth. Why do you say “Algol does not derive from FORTRAN”?

                                                                                    1. 4

                                                                                      Edit: This chart indicates otherwise? It’s a minor point in the article, but I’m interested in the truth. Why do you say “Algol does not derive from FORTRAN”?

                                                                                      Oop, I could be completely wrong here! I’d have to go and review all my notes on that. This is all stuff I’m now pulling out of my butt:

                                                                                      In Favor:

                                                                                      • John Backus worked on both
                                                                                      • Everybody knew about Fortran at the time

                                                                                      Against:

                                                                                      • None of the Algol material I could dig up mentioned FORTRAN
                                                                                      • I haven’t found any “language cognates” in Algol that could have come from FORTRAN

                                                                                      1. 3

                                                                                        I suspect the truth is somewhere in between. Lots of languages influenced Algol, but a straight line from FORTRAN may be overstating the facts.

                                                                                      2. 4

                                                                                        Fortran originally just had .EQ., .NE., .GT., etc. Support for == came later, in Fortran 90.

                                                                                        Fortran and Algol coevolved to some degree, so they cannot be placed in a tree.

                                                                                        1. 3

                                                                                          I think ALGOL derived from FORTRAN about as much as any other language [edit: ..that existed at the time]. It would depend if we’re talking ALGOL 60 specifically, or 58 (arguably closer to FORTRAN), or the whole “family”.

                                                                                          The last page of The Early Development of Programming Languages sums it up really well.

                                                                                        2. 2

                                                                                          Also ALGOL is based heavily on mathematics.

                                                                                          1. 2

                                                                                            Have you read Bill Kent’s essay on this?

                                                                                            I think we need to give you the “Bill Kent Stan Account” hat. Not that I’m complaining; I’ve liked what I’ve read.

                                                                                            1. 4

                                                                                              This is the nicest thing anyone’s ever said to me

                                                                                              1. 1

                                                                                                Hey, at least it’s a Twitter display name!

                                                                                          1. 1

                                                                                            If I understand the abstract correctly, then my aptitude for programming must be horrible based upon my Spanish and German report cards.

                                                                                            1. 2

                                                                                              They were essentially measuring ability to learn enough to pass tests.

                                                                                            1. 5

                                                                                              This paper discusses an interesting experiment (one that involved a lot of work). There is also an interesting blog post on the article ;-)

                                                                                              1. 1

                                                                                                I was led to it from your other post, and I wanted to highlight it as a separate post :). I am interested in what the community might say about the utility of fuzzing in areas that may not directly involve security (such as compilers).

                                                                                                1. 2

                                                                                                  I think compilers are fundamental to security, and I wish that were a more common position.

                                                                                                  1. 2

                                                                                                    They certainly are, in that one needs to trust the compiler. However, I wonder what the security impact is of a compiler bug that is never transmitted into the compiled artifact.

                                                                                                    1. 1

                                                                                                      Perhaps nothing, in that case. I wouldn’t want to go on-record predicting it’ll never matter; novel security vulnerabilities are often related to things that everyone assumed shouldn’t have been security-relevant in the first place. That does seem like an unlikely type of bug to be an issue, though.

                                                                                                  2. 1

                                                                                                    There is always the “whoopee do” post that helped germinate the original work :-)

                                                                                                    I think that grammar-based fuzzing has practical, non-security uses, provided the rule probabilities are realistic.
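
                                                                                                    Something like this toy sketch is what I have in mind; the grammar and the weights are made up, and in practice you would fit the weights to a corpus of real inputs:

                                                                                                    import random

                                                                                                    # Toy probabilistic grammar fuzzer: nonterminals expand according
                                                                                                    # to weighted productions. Grammar and weights here are made up;
                                                                                                    # fitting the weights to real inputs makes the output realistic.
                                                                                                    GRAMMAR = {
                                                                                                        "<expr>": [(5, ["<num>"]),
                                                                                                                   (4, ["<expr>", "<op>", "<expr>"]),
                                                                                                                   (1, ["(", "<expr>", ")"])],
                                                                                                        "<op>":   [(7, ["+"]), (3, ["*"])],
                                                                                                        "<num>":  [(1, [str(d)]) for d in range(10)],
                                                                                                    }

                                                                                                    def generate(symbol: str, depth: int = 0) -> str:
                                                                                                        if symbol not in GRAMMAR:
                                                                                                            return symbol                    # terminal
                                                                                                        productions = GRAMMAR[symbol]
                                                                                                        if depth > 8:                        # force termination when deep
                                                                                                            productions = productions[:1]
                                                                                                        weights = [w for w, _ in productions]
                                                                                                        _, rhs = random.choices(productions, weights=weights)[0]
                                                                                                        return "".join(generate(s, depth + 1) for s in rhs)

                                                                                                    for _ in range(5):
                                                                                                        print(generate("<expr>"))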

                                                                                                    And I keep meaning to write something about most existing mutation research being a complete waste of time (of course citing your PhD thesis to back up my claims). People need to research how to generate ‘bigger’ mutations, or move on to something else.

                                                                                                1. 3

                                                                                                  Now if only there was a matching dpkg for each one…..

                                                                                                  Sadly, fuzzing is mostly an academic paper mill rather than a software mill.

                                                                                                  Out of the box on Ubuntu you only get

                                                                                                  afl/bionic,now 2.52b-2 amd64 [installed] instrumentation-driven fuzzer for binary formats

                                                                                                  afl-cov/bionic,bionic 0.6.1-2 all code coverage for afl (American Fuzzy Lop)

                                                                                                  fusil/bionic,bionic 1.5-1 all Fuzzing program to test applications

                                                                                                  libfuzzer-9-dev/bionic-updates,bionic-security 1:9-2~ubuntu18.04.2 amd64 [installed] Library for coverage-guided fuzz testing

                                                                                                  wfuzz/bionic,bionic 2.2.9-1 all Web application bruteforcer

                                                                                                  zzuf/bionic,now 0.15-1 amd64 [installed] transparent application fuzzer

                                                                                                  1. 2

                                                                                                    Interesting that https://gitlab.com/akihe/radamsa is not packaged in apt. IIRC homebrew has it, for example. It’s probably best to install from source in any case – when fuzzing, you usually want the latest and greatest.

                                                                                                    1. 1

                                                                                                      Fascinating.

                                                                                                      Nice idea…..

                                                                                                      Very light on dependencies… trivial to build and install.

                                                                                                      Comes along with its own Scheme interpreter and a bunch of Scheme programs by the look of it!

                                                                                                    2. 2

                                                                                                      We have enough fuzzers already; we want people to run them and find interesting stuff.

                                                                                                      Anything slightly useful eventually becomes an academic paper mill; it’s the nature of the system. Research is like VC investing: most projects die, and a few take off spectacularly.

                                                                                                      Anyway, back to fuzzing.

                                                                                                      Here is a discussion of one of the listed papers, which I thought was excellent work: http://shape-of-code.coding-guidelines.com/2020/01/27/how-useful-are-automatically-generated-compiler-tests/

                                                                                                      1. 2

                                                                                                        Yes, I read your blog post when it popped up on my feed.

                                                                                                        Very interesting indeed.

                                                                                                        It reminds me of a moment of “Too Much Truth in Advertising”….

                                                                                                        One of the big static analysis firms used to have a page of recommendations from happy customers.

                                                                                                        One customer, a big household name, said something like, “We used X to find tens of thousands of real bugs in code that we have been shipping to our customers for more than a decade!”

                                                                                                        Which immediately told me most bugs are never found in testing, and if they are found, they probably aren’t triggered, and if they are triggered, it probably doesn’t matter…..

                                                                                                        Which also says that, by far, most software is of the cat-picture-serving grade; if it fails to serve one picture to one person… who cares? I.e., most software isn’t doing stuff that really matters.

                                                                                                        Which also says, in the fields where it really really really does matter (Avionics / Self driving cars / ….) by far most practical experience of software engineering isn’t really relevant.

                                                                                                        Except as a warning: “Here be Dragons! Do you really want to trust this stuff?”

                                                                                                        And also, don’t use C. I’m not sure what The One True language is…. but I bet it is one that makes automated whole static analysis a lot easier than C does.

                                                                                                        All this said, to me, defects really do matter, even if you’re only serving cat pictures….

                                                                                                        Why?

                                                                                                        Because testing and debugging a change built on top of a pile of flakiness is much much much harder than testing and debugging one built on a rock-solid foundation.

                                                                                                        Because as our systems get bigger and bigger, built on more and more layers, the probability of one of the tens of thousands of very-low-probability bugs biting us tends to one.

                                                                                                        As usual, MonkeyUser puts it succinctly… https://www.monkeyuser.com/assets/images/2019/139-mvp.png

                                                                                                        Which brings me back to fuzzing: I’m using fuzzing and watching the field because of one simple habit.

                                                                                                        When I start working on an area of code…. I stress the hell out of it, and make it rock solid.

                                                                                                        Then I start with any enhancements……

                                                                                                        Then I stress the hell out of my work.

                                                                                                        1. 2

                                                                                                          We have enough fuzzers already; we want people to run them and find interesting stuff.

                                                                                                          I would say, not really. In the hierarchy of fuzzers, we are struggling to reach or go beyond level 3; that is, we can generate syntactically valid programs if we have the grammar, but anything beyond that is really hard. We are still making progress, but we are nowhere near fuzzing programs that take multi-level inputs (most of the interesting stuff happens beyond the first-level parsers).

                                                                                                          Sadly, fuzzing is mostly an academic paper mill rather than a software mill.

                                                                                                          Unfortunately, I mostly agree with this. Quite a lot of fuzzing papers seem to be making rather limited improvements to the domain, and original ideas are few and far between. I believe that part of the reason is faulty measurement. Given a target such as a benchmark suite of programs and faults, it is relatively easy to over-optimize for it. On the other hand, finding bugs in numerous programs often only means that you went looking for them, and may not say anything more about the impact or originality of your approach.

                                                                                                          1. 1

                                                                                                            Quite a lot of fuzzing papers seem to be making rather limited improvements to the domain, and original ideas are few and far between.

                                                                                                            Actually, I’d argue most fuzzing papers are tweaks on afl (or, to a lesser extent, libfuzzer), since they are so easily available.

                                                                                                            If the first task in reproduction is downloading and building/patching/fixing an arcane set of out-of-date dependencies…. no giants are going to be standing on your shoulders.