1. 31

    I thought this was going to be about small websites, in the sense of individuals putting together a non-commercial, unmonetized page to talk about the things that they enjoy on the public internet. Gems like stormthewalls.dhs.org, or hovercrafter.com. This kind of site seemed to be everywhere from the 90s through the late 2000s, but they seem to have retreated behind walled gardens, or gotten squeezed out of search results by large commercial pages with far worse content. They’re run by hobbyists, and can’t compete with SEO.

    I miss them.

    1. 3

      That’s what I was expecting too, and I was sorely disappointed to see it was not that.

      1. 1

        likewise. i would have loved to read that article, and the lobsters discussion around it.

      2. 1

        https://neocities.org/browse turns up fun stuff like that sometimes.

      1. 1

        Congratulations on moving to public alpha! I just signed up to support the project. Big fan of the user-supported model and the licensing model.

        1. 1

          Thank you! I hope you like it, please send along your feedback as it comes!

        1. 4

          Great to see that data get used. One question I had after skimming the paper: how much of this performance improvement do you attribute to Noria’s data-flow computation? One of the frustrations in comparing these things is that MySQL has a lot of features that Noria presumably doesn’t, which makes it sort of an apples-and-oranges comparison. But I really like this model of incremental computation; I see it as part of a larger trend in programming tools.

          1. 3

            Hmm, I’m not entirely sure I understand the question, but I think maybe there’s a misunderstanding about how the system works. Most of the performance improvement in Noria comes from the fact that basically all SELECTs are now direct cache hits. The data-flow is just the way that we ensure that the results for those SELECT queries stay up to date as the underlying data changes (e.g., as new votes are added). This design basically decouples the performance of reads and writes: reads are always fast, and the write throughput is determined by how many things each write touches; we’re nowhere near the limit of how many reads per second we could do for the lobsters queries (see the vote results). As queries get more complicated (and again, we can support all the queries in the lobsters source code), reads do not get any slower, only writes do. It is true that writes are now slower, but they are also rarer.

            In the Lobsters workload at the moment, the biggest bottleneck is the write path for updates to read_ribbons, and that’s what’s preventing us from scaling beyond 5x MySQL. Sharding that write path may be the way to resolve that issue down the line.
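
            As a toy sketch of that read/write split (this is just the shape of the idea in Python, not Noria’s code): the result of a “votes per story” query is kept materialized, so reads are plain lookups and each write does a little incremental bookkeeping.

            # Illustration only, not Noria's implementation.
            from collections import defaultdict

            vote_counts = defaultdict(int)   # the materialized result: story_id -> count

            def add_vote(story_id):
                # write path: do the incremental work up front
                vote_counts[story_id] += 1

            def story_score(story_id):
                # read path: always a cache hit, nothing is recomputed
                return vote_counts[story_id]

            add_vote(42); add_vote(42)
            assert story_score(42) == 2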

            As for feature parity, I’m not sure exactly what you’re referring to. Is there a particular feature you’re worried about Noria not having that you rely on in MySQL? Not sure if it came across in the paper, but you can take unmodified applications that use the mysql client libraries and just plug’n’play them with Noria. At least that’s the idea, modulo our SQL parser and query planner not yet being as mature as MySQL’s.

            1. 4

              So Noria maintains materialized views, sort of like flexviews but with automatic refreshing, or like pipelinedb but where the base data is permanent (a table) rather than ephemeral (a stream). It also reminds me somewhat of ksql. And since it is the database, the application doesn’t need to handle complicated and error-prone cache invalidation (e.g. in the typical MySQL + memcache scenario). Pretty neat!

              I had the same question about apples-to-oranges comparison though. For example, transaction support, foreign keys, different index types, triggers, rocksdb vs innodb implications.

              1. 3

                Yup, you are totally right that there are features of more traditional databases that we do not yet support. This is still a research prototype, so it’s focused on the research problems first and foremost. We don’t believe any of those additional features to be fundamentally impossible in the Noria paradigm though — for example, we’re designing a scheme for adding transactions, and we believe we can do it without adding much overhead to query execution in the common case!

                Some of these other features are also really optimization details. For instance, since Noria knows the application’s queries, it could automatically choose indexes that fit the query load (even though currently it only uses hash indexes). Similarly, RocksDB vs InnoDB shouldn’t matter to the application. We use RocksDB only for storing the base table data, not for storing anything else, so it’s mostly just there for persistence, and rarely affects performance.

                As for foreign keys and triggers, those should be pretty easy to add, and mostly just need engineering, not research. In a sense, triggers are really just additional operators in the graph, so they’re almost a non-feature in Noria.

                1. 3

                  You may also find the discussion on Reddit interesting.

                2. 2

                  My question isn’t about how the system works, it’s about the breadth of MySQL, which pays a performance cost for lots of features I presume Noria doesn’t have. Multi-master setups, sharding, charset collations, many more data types, support for at least five operating systems, date and time functions, multiple storage formats, a million things. Even if Lobsters doesn’t use them, some of those are going to result in conditionals on the hot path to serving even very simple, performant queries like select * from users where id = 123 and account for some of the performance difference. I say it’s sort of an apples-and-oranges comparison because Noria and MySQL have such different featuresets - if it were possible to compile a version of MySQL that dropped support for every feature Lobsters doesn’t use, I wonder if that wouldn’t be in the neighborhood of 5x faster. I have so little intuition for it I wouldn’t be surprised at 1.01x or 20x.

                  Edit: ah, and after I hit post I reloaded the page to see @tobym made this point and you already responded to it. I’ll check out the reddit link. :)

                  1. 3

                    In addition to my response to @tobym, let me try to address some of your specific concerns too. First, Noria already supports multi-machine distribution and sharding, and replication is nearly done. Noria is also more flexible than MySQL in its data types, since it doesn’t have strict column typing. If we did apply the same schema strictness as MySQL, that would improve our performance, since we could specialize data structures to known types. While it is true that we don’t support as many data types as MySQL, adding new ones is pretty straightforward, and we already support quite a few. Similarly, adding date and time functions should be straightforward – they are just new projection and filter operations. Noria should also run without modifications on Linux, macOS, and Windows.

                    As for multiple storage formats, Noria is, in a sense, arguing that you as the developer shouldn’t have to think about that. You should tell the database what your queries are, and it should determine how best to persist and cache the data and the query results. Are there particular features associated with the storage systems that you had in mind?

                    You are right though that MySQL does more than Noria does, and that that adds overheads that Noria does not have in some cases. However, most of Noria’s performance advantage comes from the model — computing on write instead of read — as opposed to implementation. MySQL fundamentally has to compute things on reads, whereas Noria does not, and with most operations being reads, that translates to speed-ups that MySQL cannot recover. It would be great to disable lots of MySQL features, but it is unlikely to change the picture much due to this fundamental design difference.

                    The one exception to this is transactions: it could be that transactions are just so expensive to provide that the MySQL way is just way faster than anything you could achieve in the Noria paradigm. We don’t believe this to be the case though, as we already have a design sketch that adds transactions and strongly consistent reads to Noria while introducing nearly no overhead in the common case.

              1. 15

                As a junior developer doing my best to learn as much as I can, both technically and in terms of engineering maturity, I’d love to hear what some of the veterans here have found useful in their own careers for getting the most out of their jobs, projects, and time.

                Anything from specific techniques as in this post to general mindset and approach would be most welcome.

                1. 33

                  Several essentials have made a disproportionate difference in my career. In no particular order:

                  • find a job with lots of flexibility and challenging work
                  • find a job where your coworkers continuously improve themselves as much (or more) than you
                  • start writing a monthly blog of things you learn and have strong opinions on
                  • learn to be political (it’ll help you stay with good challenging work). Being political isn’t slimy, it is wise. Be confident in this.
                  • read programming books/blogs and develop a strong philosophy
                  • start a habit of programming to learn for 15 minutes a day, every day
                  • come to terms with the fact that you will see a diminishing return on new programming skills, and an increasing return on “doing the correct/fastest thing” skills (e.g. knowing what to work on, knowing what corners to cut, knowing how to communicate with business people so you only solve their problems and not just chase their imagined solutions, etc.). Lean into this, and practice this skill as often as you can.

                  These have had an immense effect on my abilities. They’ve helped me navigate away from burnout and cultivated a strong intrinsic motivation that has lasted over ten years.

                  1. 5

                    Thank you for these suggestions!

                    Would you mind expanding on the ‘be political’ point? Do you mean to be involved in the ‘organizational politics’ where you work? Or in terms of advocating for your own advancement, ensuring that you properly get credit for what you work on, etc?

                    1. 13

                      Being political is all about everything that happens outside the editor. Working with people, “managing up”, figuring out the “real requirements”: those are all political.

                      Being political is always ensuring you do one-on-ones, because employees who do them are more likely to get higher raises. It’s understanding that marketing is often reality, and you are your only marketing department.

                      This doesn’t mean put anyone else down, but be your best you, and make sure decision makers know it.

                      1. 12

                        Basically, politics means having visibility in the company and making sure you’re managing your reputation and image.

                        A few more random bits:

                    2. 1

                      start a habit of programming to learn for 15 minutes a day, every day

                      Can you give an example? So many days I sit down in front of my computer after work (or before). I want to do something, but my mind is like, “What should I program right now?”

                      As you can probably guess nothing gets programmed. Sigh. I’m hopeless.

                      1. 1

                        Having a plan before you sit down is crucial. If you sit and putter, you won’t actually improve; you’ll do what’s easy.

                        I love courses and books. I also love picking a topic to research and writing about it.

                        Some of my favorite courses:

                        1. 1

                          I’ve actually started SICP and even bought the hard copy a couple weeks ago. I’ve read the first chapter and started the problems. I’m on 1.11 at the moment. I also started the Stanford 193P course as something a bit easier and “fun” to keep variety.

                    3. 14

                      One thing that I’ve applied in my career is that saying, “never be the smartest person in the room.” When things get too easy or routine, I try to switch roles. I’ve been lucky enough to work at a small company that grew very big, so I had the opportunity to work on a variety of things: backend services, desktop clients, mobile clients, embedded libraries. I was very scared every time I asked, because I felt like I was in over my head. I guess change is always a bit scary. But every time, it put some fun back into my job, and I learned a lot from working with people with entirely different skill sets and expertise.

                      1. 11

                        I don’t have much experience either, but the best choice I made in the last year was to stop worrying about how good a programmer I am and to focus on how to enjoy life.

                        We have one life; don’t let anxieties come into play, even if you intellectually think working more should help you.

                        1. 8

                          This isn’t exactly what you’re asking for, but it’s something to consider. Someone who knows how to code reasonably well and something else is more valuable than someone who just codes. You become less interchangeable, and therefore less replaceable. There’s tons of work that people who purely code don’t want to do, but find very valuable. For me, that’s documentation. I got my current job because people love having docs, but hate writing docs. I’ve never found myself without multiple options every time I’ve looked for work. I know someone else who did this, but it was “be fluent in Japanese.” Japanese companies love people who are bilingual with English. It made his resume stand out.

                          1. 1

                            I got my current job because people love having docs, but hate writing docs.

                            Your greatest skill in my eyes is how you interact with people online as a community lead. You have a great style for it. Docs are certainly important, too. I’d have guessed they hired you for the first set of skills rather than docs, though. So, that’s a surprise for me. Did you use one to pivot into the other or what?

                            1. 7

                              Thanks. It’s been a long road; I used to be a pretty major asshole to be honest.

                              My job description is 100% docs. The community stuff is just a thing I do. It’s not a part of my deliverables at all. I’ve just been commenting on the internet for a very long time; I had a five digit slashdot ID, etc etc. Writing comments on tech-oriented forums is just a part of who I am at this point.

                              1. 2

                                Wow. Double unexpected. Thanks for the details. :)

                          2. 7

                            Four things:

                            1. People will remember you for your big projects (whether successful or not) as well as tiny projects that scratch an itch. Make room for the tiny fixes that are bothering everyone; the resulting lift in mood will energize the whole team. I once had a very senior engineer tell me my entire business trip to Paris was worth it because I made a one-line git fix to a CI system that was bothering the team out there. A cron job I wrote in an afternoon at an internship ended up dwarfing my ‘real’ project in terms of usefulness to the company and won me extra contract work after the internship ended.

                            2. Pay attention to the people who are effective at ‘leaving their work at work.’ The people best able to handle the persistent, creeping stress of knowledge work are the ones who transform as soon as the workday is done. It’s helpful to see this in person, especially seeing a deeply frustrated person stand up and cheerfully go “okay! That’ll have to wait for tomorrow.” Trust that your subconscious will take care of any lingering hard problems, and learn to be okay leaving a work in progress to enjoy yourself.

                            3. Having a variety of backgrounds is extremely useful for an engineering team. I studied electrical engineering in college and the resulting knowledge of probability and signal processing helped me in environments where the rest of the team had a more traditional CS background. This applies to backgrounds in fields outside engineering as well: art, history, literature, etc will give you different perspectives and abilities that you can use to your advantage. I once saw a presentation about using art critique principles to guide your code reviews. Inspiration can come from anywhere; the more viewpoints you have in your toolbelt the better.

                            4. Learn about the concept of the ‘asshole filter’ (safe for work). In a nutshell, if you give people who violate your boundaries special treatment (e.g. a coworker who texts you on your vacation to fix a noncritical problem gets their problem fixed), then you are training people to violate your boundaries. You need to make sure that people who do things ‘the right way’ (in this case, waiting for when you get back or finding someone else to fix it) get priority, so that over time you train people to respect you and your boundaries.

                            1. 3

                              I once saw a presentation about using art critique principles to guide your code reviews. Inspiration can come from anywhere; the more viewpoints you have in your toolbelt the better.

                              The methodology from that talk is here: http://codecrit.com/methodology.html

                              I would change “If the code doesn’t work, we shouldn’t be reviewing it”. There is a place for code review of not-done work, of the form “this is the direction I’m starting to go in…what do you think”. This can save a lot of wasted effort.

                            2. 3

                              The biggest mistake I see junior (and senior) developers make is key mashing. Slow down, understand a problem, untangle the dependent systems, and don’t just guess at what the problem is. Read the code, understand it. Read the code of the underlying systems that you’re interacting with, and understand it. Only then, make an attempt at fixing the bug.

                              Stabs in the dark are easy. They may even work around problems. But clean, correct, and easy to understand fixes require understanding.

                              1. 3

                                Another thing that helps is the willingness to dig into something you’re obsessed with even if it is deemed not super important by everyone around you. E.g., if you find a library / language / project fun and seem to get obsessed with it, that’s great; keep going at it and don’t let the existential “should I be here” or other “is everyone around me doing this too / recommending this” questions slow you down. You’ll probably get on some interesting adventures.

                                1. 3

                                  Never pass up a chance to be social with your team/other coworkers. Those relationships you build can benefit you as much as your work output.

                                  (This doesn’t mean you compromise your values in any way, of course. But the social element is vitally important!)

                                1. 1

                                  This really misses the most obvious argument for using Docker: because it’s easy.

                                  It’s really as simple as that. It saves me time, and I spend that time doing more useful things.

                                  1. 2

                                    The author contrasts “easy” and “simple”, agreeing that Docker may be easy but it doesn’t simplify things. (Inspired by Rich Hickey’s talk https://www.infoq.com/presentations/Simple-Made-Easy which I highly recommend).

                                    1. 2

                                      account suspended. what was that?

                                      1. 4

                                        The tweet was two screenshots.

                                        One was of Twitter user @KrangTNelson tweeting (paraphrased) “No thanks, I only get my crypto tips from the guy who made Garfield”.

                                        The second was a screenshot of Scott Adams’s Twitter account showing he had blocked Krang.

                                        No idea why Krang was banned.

                                        1. 1

                                          Best guess is a parody tweet promising “antifa super-soldiers” on November 4th, which some strange people took seriously and complained about. His account’s been restored.

                                      1. 1

                                        I find e/E and b/B very quick, often better than f/F because there’s no need to pick a letter to jump to.

                                        1. 3

                                          I use tig all the time; it’s a great tool. In particular, tig blame mode lets you jump to a line’s parent commit with , (and you can return to the previous state with <). This is great for finding the provenance of a given bit of code.
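
                                          For example (the file path here is just a placeholder):

                                          tig blame src/main.c    # open blame view for a file
                                          # press ',' on a line to jump to its parent commit,
                                          # and '<' to step back to the previous view state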

                                          1. 4

                                            GPG is so simple. You and someone else generate some keys. You exchange them safely, somehow validating each other. Then, just write stuff in a text file with a boring name, seal it with GPG, and send the resulting file over some medium (e.g. email). Ignore all the other functionality, since it’s complicated or requires trusting third parties. Just do one-to-one with text files. The UI problems could even be scripted away or programmed as an extension into an editor.

                                            The result: you get protection the NSA couldn’t break, and it works on diverse hardware and software (which reduces subversion risk). Most people aren’t worried about the NSA; they usually face a weaker threat. So, something the NSA had a hard time with should be extra safe for them.

                                            1. 4

                                              The problem is that this simple, relatively easy-to-use workflow isn’t the one advocated. Instead, GPG nerds go on about the web of trust and key-signing parties and tell people off for doing minor things wrong.

                                              Is there a GPG workflow documented somewhere that is as easy to use as Signal with a verified key? I would love to use that.

                                              1. 1

                                                Not that easy yet but simple enough to be made easy. Start with this:

                                                http://irtfweb.ifa.hawaii.edu/~lockhart/gpg/

                                                Here are the major steps:

                                                Generating a key, exporting one’s own public key, importing others’ public keys, encrypting a file for a specific user whose key is in the database, and decrypting a file from that user. The front end just needs to be able to handle those actions. The whole thing might be reduced to an open or seal command in a plugin for a text editor for day-to-day use, with extra commands in the menu for generate, import, export, or backup of the database. Alternatively, modify GPG itself to straight-up delete all the other crap, or at least the interfaces to it, and import the result into a GUI app with a better interface.
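
                                                For example, the day-to-day loop in plain gpg commands is roughly this (names, addresses, and filenames are placeholders):

                                                gpg --gen-key                                         # one-time: generate your keypair
                                                gpg --armor --export alice@example.org > alice.asc    # hand your public key to the other person
                                                gpg --import bob.asc                                  # import the key they handed you
                                                gpg --encrypt --armor --recipient bob@example.org notes.txt   # seal notes.txt as notes.txt.asc
                                                gpg --decrypt notes.txt.asc > notes.txt               # open a file they sent you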

                                              2. 3

                                                The trouble is that all the boring, trivial UI stuff never gets done. Partly, I suspect, because no-one is ever paid to do it.

                                                1. 2

                                                  the guy developing gpg gets money, more than a lot of free software projects can dream of: https://en.wikipedia.org/wiki/Werner_Koch

                                                  i guess with that money a somewhat usable gui should be possible.

                                                  1. 1

                                                    Remember that he got so little for so long that he was thinking of quitting. Then some emergency money was thrown at him, largely without conditions, after the press coverage about that. So it’s not the same as a person just making stuff with money coming in regularly and users expecting great UX. He can do it, but he is not incentivized to do it.

                                                    1. 2

                                                      Even if he was, he’s not a UX expert - that’s something that requires a bit of knowledge, planning, research, and likely a big refactor afterwards.

                                                      1. 1

                                                        Good point. Most programmers, esp for crypto stuff, aren’t UX experts. Hell, we’ve been seeing “Why Johnny Can’t Use My App” papers from them for some time now.

                                                        1. 2

                                                          Of course, he could hire a UX designer (or firm), but that’s quite a bit of money. Considering GPG’s status though, someone might be willing to do it pro bono.

                                                      2. 2

                                                        usability is a well known reason why people don’t use it. why not take part of the money and pay another developer to build and maintain a nice gui? if the wikipedia article is still correct, the donations from facebook and stripe equal $100000/y. even if it were reduced to 50k due to taxes, it is still a nice amount of cash in germany: “In Germany, the average household net adjusted disposable income per capita is USD 31 925” http://www.oecdbetterlifeindex.org/topics/income/

                                                    2. 1

                                                      Good point. Alternatively, as in my experience, the programmers just hack together a solution that works for them and their local audience, then don’t put in further effort to develop it into a more general solution for a wide audience. I didn’t even publish mine since they were very, very specific to my use case.

                                                      1. 6

                                                        keybase has done quite a decent job making a more user-friendly interface to GPG (CLI and GUI).

                                                        1. 4

                                                          Are they still encouraging users to hand over their private keys to them? That puts it in the bad sector as far as I’m concerned; if I’m going to be trusting a central organisation I might as well just use Facebook messenger.

                                                          1. 2

                                                            Good point. I loved the Keybase concept when I last looked into it. Since this topic keeps coming up, I might try out their client in the near future to see if I can offer something better than a GPG cheat sheet, haha.

                                                    1. 6

                                                      If candidates are so highly sought-after, couldn’t they request extensions on their offers?

                                                      I realize it’s a bit risky, and the real challenge is disseminating this knowledge.

                                                      1. 6

                                                        I suspect the exploding offer or vanishing signing bonus is a tactic that takes advantage of the young candidate’s lack of experience with interviewing and getting hired. They just spent a bunch of money on an expensive degree; it seems foolish to risk giving up a signing bonus or an entire offer to wait for a better offer that may not materialize.

                                                        1. 2

                                                          I’m curious if this signals that they are becoming less sought after – in other words, that companies realize they have the upper hand, and can strongarm candidates. I have no way of knowing, though.

                                                        1. 10

                                                          Up-voting because the original thread and the linked rebuttal are interesting reads.

                                                          What it comes down to is that the OpenBSD developers believe that re-implementing a user-land network stack is silly because of the risk it introduces, and the rebuttal says that is woefully outdated thinking because of the high demand for specialized and dedicated user-land networking. Given OpenBSD’s philosophy and valid point about maintenance costs, I think the rebuttal is unfair. If netmap is critical for some specialized application, couldn’t one go use it on FreeBSD? Expecting distros to have the same philosophies about user- vs kernel-space, generality vs specialization, etc defeats the purpose of having multiple distros to begin with.

                                                          1. 4

                                                            I switched to org-mode a while back, and love it. I used vim and a plain text file very successfully for years before that. My first stab at org-mode failed hard because I tried to use too many features. Now, my working model is similar to the plaintext file, but with handy shortcuts: top-level bullets with the date, notes and TODO items indented below. TODO items are trivially searchable with C-a t. I also customize the TODO states. This file also acts as my engineer’s notebook. That’s it!
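
                                                            Roughly, the file looks like this (entries invented for illustration; the customized TODO states come from an in-file #+TODO line):

                                                            #+TODO: TODO IN-PROGRESS | DONE
                                                            * 2018-10-01
                                                            ** TODO follow up on the failing CI build
                                                            ** notes from the planning meeting: ...
                                                            * 2018-10-02
                                                            ** DONE review the deploy script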

                                                            1. 1

                                                              what’s interesting about this?

                                                              1. 1

                                                                It’s a toy implementation of a simple virtual computer, which is a great tool for learning about instruction sets, registers, memory, etc.

                                                              1. 0

                                                                Read this while listening to the best of Hans Zimmer for a truly inspirational read: https://www.youtube.com/watch?v=AAaUoOOUFA4

                                                                1. 4

                                                                  Took me a while to figure out what bothered me about this post – it makes the deployment choice for components (e.g. threads vs processes vs machines) sound almost trivial. It’s anything but. If a component is deployed in-process, perhaps using green or native threads, it’s reasonable to use a blocking, fine-grained API that communicates with native domain objects. If it’s deployed as a REST service, that means using an asynchronous API, using a coarser API to balance out the additional latency, choosing a serialization format, adding more monitoring…the list goes on.

                                                                  For a high-level whiteboard conversation, this assumption is fine. When building a production system, it’s not.

                                                                  1. 1

                                                                    I agree with you that the devil is in the details, and there are a ton of details that need to be considered for us to not care whether something is in-process or not. This can be true for GC pressure, thread pool usage, memory usage, CPU consumption, context switches, practically any resource.

                                                                    On the subject of programming model, I wonder if it might be possible to end up with the best of both worlds.

                                                                    For fine vs coarse granularity, we can have an abstraction which automatically batches for us: consider Promise Pipelining (à la Cap'n Proto, or E), or Haxl’s automatic batching (this is redundant, but I don’t have a better name for it). We can imagine a system where we program against a fine, non-blocking interface with the understanding that it will batch for us, planning the query as efficiently as possible.
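
                                                                    As a rough sketch of that kind of batching (not Cap'n Proto’s or Haxl’s API; names here are invented): callers make fine-grained load() calls, and the loader coalesces everything requested in the same tick into one coarse fetch.

                                                                    # Illustrative only: a DataLoader-style batcher.
                                                                    import asyncio

                                                                    class BatchingLoader:
                                                                        def __init__(self, batch_fetch):
                                                                            self.batch_fetch = batch_fetch   # async: list of keys -> dict of results
                                                                            self.pending = {}                # key -> Future awaited by callers
                                                                            self.scheduled = False

                                                                        def load(self, key):
                                                                            loop = asyncio.get_running_loop()
                                                                            if key not in self.pending:
                                                                                self.pending[key] = loop.create_future()
                                                                            if not self.scheduled:
                                                                                self.scheduled = True
                                                                                loop.create_task(self._flush())   # runs once callers yield to the loop
                                                                            return self.pending[key]

                                                                        async def _flush(self):
                                                                            pending, self.pending, self.scheduled = self.pending, {}, False
                                                                            results = await self.batch_fetch(list(pending))   # one coarse round-trip
                                                                            for key, fut in pending.items():
                                                                                fut.set_result(results[key])

                                                                    async def fetch_users(ids):                # stand-in for the real coarse call
                                                                        return {i: "user-%d" % i for i in ids}

                                                                    async def main():
                                                                        loader = BatchingLoader(fetch_users)
                                                                        a, b = await asyncio.gather(loader.load(1), loader.load(2))   # two fine reads,
                                                                        print(a, b)                                                   # one batched fetch

                                                                    asyncio.run(main())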

                                                                    With asynchronous vs synchronous, we should in theory be able to get the benefits of synchronous execution from code written in an asynchronous style. A sufficiently sophisticated model could figure out that doing it synchronously will reap rewards, and adjust, so that although it’s written in an asynchronous style, it’s actually executing synchronously.

                                                                    With that said, it definitely depends how far you’re willing to go on the abstraction scale, and it might not be worth it.

                                                                  1. 1

                                                                    I’m learning org mode once and for all.

                                                                    I’ve maintained a text file for years now which is a combination to-do list/engineer’s notebook. Over the last year, I’ve had trouble maintaining a good structure while working on multiple projects at a time. Reducing the number of simultaneous projects is not an option, so I looked at other options including a concerted effort with Evernote. None have fit the bill so far, but I have hopes for org-mode.

                                                                    1. 6

                                                                      Back-pressure is the name of the game when it comes to queueing. I’m interested to see how the reactive streams project turns out.

                                                                      In the systems I work on, fixed-length blocking queues are prevalent. They work well in two ways: 1) when the queue is full, adding to the queue blocks the caller, providing back-pressure; and 2) depending on the type of work, a worker can drain N items from the queue to process in batch, which improves throughput.
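
                                                                      Concretely, the pattern looks something like this sketch (Python just for illustration; handle() stands in for the real batch processing): put() blocks when the queue is full, and the worker drains up to N items per pass.

                                                                      import queue

                                                                      work = queue.Queue(maxsize=100)          # fixed length: the bound is the back-pressure

                                                                      def produce(item):
                                                                          work.put(item)                       # blocks the caller while the queue is full

                                                                      def handle(batch):                       # stand-in for the real batch processing
                                                                          print("processing", len(batch), "items")

                                                                      def worker(batch_size=10):
                                                                          while True:
                                                                              batch = [work.get()]             # wait for at least one item
                                                                              while len(batch) < batch_size:
                                                                                  try:
                                                                                      batch.append(work.get_nowait())   # drain up to batch_size items
                                                                                  except queue.Empty:
                                                                                      break
                                                                              handle(batch)                    # process the whole batch for better throughput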

                                                                      1. 5

                                                                        I was expecting to read about a replication bug. Turns out to be a very nice, clear explanation about a potentially confusing discrepancy between the stats reported by a redis master and slave.

                                                                        1. 1

                                                                          I’m working on an automated deployment process, starting with packaging up Scala applications into an rpm. Currently packaging uses sbt-assembly and deployment uses scripts managed by puppet.

                                                                          1. 2

                                                                            We use sbt-assembly to package up everything to a .deb which is deployed via Puppet. It’s a Makefile, a Debian rule file and then some .install scripts - pretty straightforward but I wish it were nicer.

                                                                            I’m really hoping the sbt2nix project will make things a bit nicer:

                                                                            https://github.com/charleso/sbt2nix

                                                                          1. 6

                                                                            This is from early 1978; it was before EWD had switched to writing the EWDxxx series entirely in his handwriting, so reading the HTML transcription is better than reading the original PDF.

                                                                            It has a few gems in it:

                                                                            some people found error messages they couldn’t ignore more annoying than wrong results, and, when judging the relative merits of programming languages, some still seem to equate “the ease of programming” with the ease of making undetected mistakes.

                                                                            (PHP is perhaps the modern paragon of this questionable virtue, although Perl, Forth, and assembly language have held the crown previously, and JS has attempted to contest PHP’s position.)

                                                                            the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole.

                                                                            This is a case where a one-line aside from an EWD does a better job of describing a subject than the entire Wikipedia article on it.

                                                                            The importance of notation as a tool of thought was a major theme of elite computer science at the time: that was the title of Iverson’s Turing Award lecture about APL the year after this EWD, and had also been the subject of Backus’s rather worse Turing Award lecture the year before, which Dijkstra famously blasted in EWD692. Backus’s lecture and attendant research, despite its serious flaws, inspired much of the work in functional programming during the 1980s, although of course LISP and ISWIM were inspirations from 1959 and 1966, respectively. ISWIM looks a hell of a lot like modern ML.

                                                                            On a related note, I’ve often noticed that our programming languages are very poorly suited for handwriting: they underutilize the spatial arrangement, ideographic symbols, text size variation, and long lines (e.g. horizontal and vertical rules, boxes, and arrows) that we can easily draw by hand, instead using textual identifiers and nested grammatical structure that can easily be rendered in ASCII (and, in the case of older languages like FORTRAN and COBOL, EBCDIC and FIELDATA too.) This makes whiteboard programming and paper pseudocoding unnecessarily cumbersome; even if you do it in Python, you end up having to scrawl out class and while and return and self. self. self. in longhand. Totally by coincidence, this morning on the bus on the way in to work, I was coding Quicksort in a paper-oriented algorithmic notation I’ve been working on, on and off, over the last few years, to solve this problem. I would include a sample here, but I don’t yet have anything digitized.

                                                                            1. 1

                                                                              Can you elaborate on your paper-oriented algorithmic notation? Sounds interesting.

                                                                              1. 2

                                                                                At present I’m using these conventions:

                                                                                • underline for subroutine/function/method definition
                                                                                • double-underline for class definition
                                                                                • I’m still vacillating between using indentation or a vertical line joined to the underline to delimit the extent of the function or class body
                                                                                • a vertical line for iteration (with the while-condition on the left, or the iteration variables and sequence on separate lines for a foreach loop, and the contents on the right). An infinite loop, or loop until break or early return, is indicated by a vertical line with nothing on the left, analogously to C for (;;). I don’t have notation for for (foo; bar; baz).
                                                                                • vertical stacking for sequence (as in Python)
                                                                                • a vertical line with crossing horizontal lines for conditionals (if-elseif-elseif-else or, with an extra horizontal line across the top with an expression on top of it, switch or pattern-matching). The conditions or patterns to match go on the left, with their consequents on the right. The last condition is typically empty to mean else.
                                                                                • ↑ for return, as in Smalltalk
                                                                                • Ruby-like @ for self, with the ability to include @vars in argument lists to set them implicitly from the argument
                                                                                • . for references to methods or other objects' instance variables, like C and its progeny including Java and Python
                                                                                • subscripting for array indexing, as in linear algebra
                                                                                • Python-style a:b for slicing, but with Golang/Numpy non-copying semantic; Alexandrescu convinced me that this kind of thing is the right primitive for generic algorithms on iterators, generators, and sequences in general.
                                                                                • prefix # for getting the length of a slice or other container
                                                                                • ← for mutating assignment (I’m vacillating between = and Golang-style := for declaration-plus-initialization), as in many notations including Smalltalk
                                                                                • horizontal whitespace for function application and argument separation, as in ML or Haskell
                                                                                • weird box structures enclosing arguments for algebraic data type constructors, although sometimes I’ve just used function application notation
                                                                                • comma for tuples, including destructuring tuples, enabling multiple assignment (Lua- or Python-style)

                                                                                So here’s a variant of a common example which can be more or less rendered in Markdown and Unicode:

                                                                                point @x @y
                                                                                ____________

                                                                                @r = √(@x² + @y²)
                                                                                @θ = atan2 @y @x

                                                                                r
                                                                                ↑ @r

                                                                                θ
                                                                                ↑ @θ

                                                                                ⼻ Δx Δy
                                                                                @x, @y ← @x + Δx, @y + Δy

                                                                                Alternatively you could write that last method, whose name is an ideogram for “step”, this way, which is probably how I’d normally do it:

                                                                                ⼻ Δx Δy
                                                                                @x += Δx
                                                                                @y += Δy

                                                                                You can see that it uses very few pen strokes compared to more ASCII-oriented notations, but without sacrificing rigor.

                                                                                What do you think?

                                                                                1. 1

                                                                                  I’ve hacked up some examples with CSS and HTML. It’s pretty imperfect still, but well enough explained to criticize.