1. 8

    I prefer Vim’s approach of using JSON for communicating with external jobs instead of using MessagePack like Neovim does. I’m also not very partial to inventing a new binary format as a viminfo replacement.

    It is my understanding that Neovim wasn’t available on Windows for quite a while. I’m all for shedding features in order to facilitate building new ones, and it might take Neovim in interesting new directions, but then the new project should not be considered a drop-in replacement for the old one.

    (I even kind of like VimScript, despite its weirdness.)

    1. 11

      I’m also not very partial to inventing a new binary format as a viminfo replacement.

      viminfo is an ad-hoc unstructured format. shada is trivially parsable (and streamable), because it’s just msgpack. It’s misleading to call it a “new binary format”; it’s just plain old msgpack with some sentinel values for guaranteed round-tripping to/from VimL.

      It doesn’t make sense to prefer viminfo, which isn’t a format at all.

      1. 1

        Anyone among us using OniVim? I’m wondering how it compares to Vim, Atom, VSCode, and Sublime Text.

      1. 3

        Could a language like Coq be used for purposes similar to TLA’s, while also being able to extract useful programs?

        1. 1

          Yes. If your specification language of choice (and/or a framework built on top of it) can do synthesis and code generation, it’d be able to derive correct implementations of your spec. Coq in particular supports extraction of verified programs to OCaml, Haskell, or Scheme.

        1. 1

          I feel your pain, but this is the efficiency of the free market! ;-)

          Whether or not a hyperlink is broken on the web still relies entirely upon the maintenance of the page pointed to, despite all hypertext projects prior to the 1992 Berners-Lee project having solved this problem.

          Great, you made me feel young! :-)

          What are you talking about?

          I cannot imagine how they could fix the arcs of a graph they do not control entirely after modifying a bunch of nodes they own…

          Can you share more details?

            1. 2

              As the resident hypertext crank, pretty much every time I say “hypertext” I’m referring to Project Xanadu. However, Xanadu was only slightly ahead of the twenty or thirty commercial hypertext systems available in the 1980s in solving this problem.

              TBL’s pre-web hypertext system, Enquire, also didn’t have the breaking-hyperlink problem.

              Other than “make addresses permanent”, I don’t think Xanadu’s solutions to this problem are the best ones, personally. I prefer distributed systems over centralized services, and prior to the early 90s, Xanadu addressing schemes were intended for use with centralized services; lately, Xanadu addressing schemes are actually just web addressing schemes, leaning on The Internet Archive and other systems to outsource promises of permanent URLs. I prefer named-data, and the hypertext systems I’ve worked on since leaving Xanadu have used IPFS.

              “Make addresses permanent” is also a demand made but not enforced by web standards. Nobody follows it, so facilities based on the assumption of permanent addresses are broken or simply left unimplemented.

            2. 1

              Modifying hypertext documents is a no-no (even, theoretically, on the web: early web standards consider changing the content pointed to by a URI to be rude & facilities exist that assume that such changes only occur in the context of renaming or cataclysm; widespread use of CGI changed this).

              The appropriate way to do hypertext with real distribution is to replace host-centric addresses (which can only ever be temporary because keeping up a domain name has nonzero cost) with named-data networking (in other words, permanent addresses agnostic about their location) & rely upon the natural redundancy of popular content to invert the current costs. (In other words, the BitTorrent+DHT model).

              A modified version of a document is a distinct document & therefore has a distinct address.
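
              A minimal sketch of the idea, assuming SHA-256 as the content hash (Haskell, with the cryptonite package; IPFS addresses are built from exactly this kind of hash, wrapped in a multihash):

              ```haskell
              import Crypto.Hash (hashWith, SHA256 (..))
              import Data.ByteString.Char8 (pack)

              -- Content addressing: the address is derived from the bytes
              -- themselves, so an edited document is a new document with a
              -- new address, and old links keep resolving to the old bytes.
              main :: IO ()
              main = do
                let v1 = pack "Hello, hypertext."
                    v2 = pack "Hello, hypertext!" -- a one-byte edit
                print (hashWith SHA256 v1) -- address of version 1
                print (hashWith SHA256 v2) -- a completely different address
              ```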

              This kind of model would not have been totally unheard of when TBL was designing the web, but it would have been a lot more fringe than it is now. Pre-web hypertext, however, typically had either a centralized database or a federation of semi-centralized databases to ensure links didn’t break. (XU88 had a centralized service but depended on permanent addresses, part of which were associated with user accounts, and explicit version numbering: no documents were modified in place, and diffs were tracked so that links to a certain sequence in a previous version would link to whatever was preserved, no matter how munged, in later versions.)

            1. 7

              Learn PowerShell (Register-ScheduledJob, Get-NetAdapter, Get-NetIPAddress, Invoke-WebRequest, Invoke-RestMethod, Export-Csv, Export-Clixml, Out-GridView…)

              Install PSReadLine for readline-like keybindings. https://github.com/lzybkr/PSReadLine

              You can use clink https://mridgers.github.io/clink/ for readline bindings on the vanilla cmd.

              Install ripgrep and fzf.

              Use Pageant with PuTTY. Also, I suppose this is common knowledge, but you can launch PuTTY from the command line while overriding some parameters of a saved session, like ‘putty -load “Some Session” 178.128.78.50’

              AutoHotkey can be pretty useful.

              PuTTY seems to not be willing to scroll more than a screenful or two of text

              Does changing the Lines of scrollback in Configuration > Window help things?

              In particular, the mouse cursor goes invisible,

              Is “hide mouse cursor when typing” enabled in Configuration > Window > Appearance?

              1. 9

                Yeah, “don’t fight the platform” would be my advice. You have to relax and let the Windowsness of it all wash over you. Admittedly, I also find this impossibly difficult, but trying to turn Windows into a poor clone of a poor clone of a poor clone of AT&T Unix is probably a losing strategy in the intermediate run.

                1. 2

                  I think you get PSReadLine by default these days, as well as ssh (if you enable it).

                1. 5

                  The comment on how Swift makes prototyping harder by forcing developers to express correct types is spot-on, and would apply to other strongly typed languages. One could argue that you should solve the problem on paper first and sketch out the types before writing the implementation, but I find it a good example of how dynamic languages shift our expectations in terms of programming ergonomics.

                  1. 19

                    I’d be very curious to hear what situations you’ve encountered where you were prototyping a solution that you understood well enough to turn into code, but not precisely enough to know its types. I’ve personally found that I can’t write a single line of code – in any language, static or dynamic – without first answering basic questions for myself about what kinds of data will be flowing through it. What questions do you find the language is forcing you to answer up-front that you would otherwise be able to defer?

                    1. 7

                      When I have no idea where I’m going I sometimes just start writing some part of the code I can already foresee, but with no clue how anything around it (or even that part itself) will end up looking in the final analysis. I have no data structures and overall no control flow in mind, only a vague idea of what the point of the code is.

                      Then with lax checking it’s much easier to get to where I can run the code – even though only a fraction of it even does anything at all. E.g. I might have some function calls where half the parameters are missing because I didn’t write the code to compute those values yet, but it doesn’t matter: either that part of the code doesn’t even run, or it does but I only care about what happens before execution gets to the point of crashing. Because I want to run the stuff I already have so I can test hypotheses.

                      In several contemporary dynamic languages, I don’t have to spend any time stubbing out missing bits like that because the compiler will just let things like that fly. I don’t need the compiler telling me that that code is broken… I already know that. I mean I haven’t even written it yet, how could it be right.

                      And then I discover what it is that I even wanted to do in the first place as I try to fill in the parts that I discover are missing as I try to fill in the parts that I discover are missing as I try to fill in the parts that I discover are missing… etc. Structures turn out to repeat as the code grows, or bits need to cross-connect, so I discover abstractions suggesting themselves, and I gradually learn what the code wants to look like.

                      The more coherent the code has to be to compile, the more time I have to spend stubbing out dummy parts for pieces of the code I don’t even yet know will end up being part of the final structure of the code or not.

                      It would of course be exceedingly helpful to be able to say “now check this whole thing for coherence please” at the end of the process. But along the way it’s a serious hindrance.

                      (This is not a design process to use for everything. It’s bottom-up to the extreme. It’s great for breaking into new terrain though… at least for me. I’m terrible at top-downing my way into things I don’t already understand.)

                      1. 4

                        That’s very interesting! If I’ve understood you correctly, your prototyping approach seems to allow you to smoothly transform non-executable sketches into executable programs, by using the same syntax for both. So instead of sketching ideas for your program on a napkin, or on a whiteboard, or in a scratch plaintext file, you can do that exploration using a notation which is both familiar to you and easy to adapt into an actual running program. Would it be correct to say that by the time you actually run a piece of code, you have a relatively clear idea of what types of data are flowing through it, or at least the parts of it that are actually operational? And that the parts of your program whose types you’re less confident about are also the parts you aren’t quite ready to execute yet?

                        If so, then I think our processes are actually quite similar. I mainly program in languages with very strict type systems, but when I first try to solve a problem I often start with a handwritten sketch or plaintext pseudocode. Now that I think about it, I realize that I often subconsciously try to keep the notation of those sketches as close as possible to what the eventual executable code might look like, so that I’ll be able to easily adapt it when the time comes. But either way, we’re both bypassing any kind of correctness checking until we actually know what it is we’re doing, and only once we reach a certain level of confidence do we actually run or (if the language supports it) typecheck our solution.

                        Let me know if I’ve missed something about your process, but I think I understand the idea of using dynamic languages for prototyping much more clearly now. What always confused me is that the runtime semantics and static types (whether automatically checked or not) of a program seem so tightly coupled that it would be nearly impossible to figure one out without the other, but you seem to be suggesting that when you’re not sure about the types in a section of your program, you’re probably not sure about its exact runtime semantics either, and you’re keeping it around as more of a working outline than an actual program to be immediately run. So even in that early phase, types and semantics are still “coupled,” but only in the sense that they’re both incomplete!

                        1. 3

                          If I’ve understood you correctly, your prototyping approach seems to allow you to smoothly transform non-executable sketches into executable programs, by using the same syntax for both.

                          Yup.

                          Would it be correct to say that by the time you actually run a piece of code, you have a relatively clear idea of what types of data are flowing through it, or at least the parts of it that are actually operational?

                          Well… it depends. For the parts that are most fully written out, yes. For the parts that aren’t, no. Neither of which are relevant when it comes to type checking, of course. But at the margins there is this grey area where I have some data structures but I only know half of what they look like. And at least one or two of them shift shape completely as the code solidifies and I discover the actual access patterns.

                          If so, then I think our processes are actually quite similar.

                          Sounds like it. I’d wonder if the different ergonomics don’t still lead to rather different focus in execution (what to flesh out first etc.) so that dynamic vs static still has a defining impact on the outcome. But it sure sounds like there is a deep equivalence, at least on one level.

                          Now that I think about it, I realize that I often subconsciously try to keep the notation of those sketches as close as possible to what the eventual executable code might look like

                          Seems natural, no? 😊 The code is ultimately what you’re trying to get to, so it makes sense to keep the eventual translation distance small from the get-go.

                          So even in that early phase, types and semantics are still “coupled,” but only in the sense that they’re both incomplete!

                          I had never thought about it this way, but that sounds right to me as well.

                      2. 4

                        You didn’t ask me but I’ll answer anyway because I’d like your advice! I am currently prototyping a data processing pipeline. Raw sensor data comes in at one end then is processed by a number of different functions, each of which annotates the data with its results, before emitting the final blob of data plus annotations to the rest of the system. As a concrete example, if the sensor data were an image, one of the annotations might be bounding boxes around objects detected in the image, another might be some statistics, etc.

                        At this stage in the design, we don’t know what all the stages in the pipeline will need to be. We would like to be able to insert new functions at any stage in the pipeline. We would also like to be able to rearrange the stages. Maybe we will reuse some of these functions in other pipelines too.

                        One way to program this is the “just use a map” style promoted by Clojure. Here every function takes a map, adds its results to the map as new fields, then passes on the map to the next function. So each function will accept data that it doesn’t recognize and just pass it on. This makes everything nicely composable and permits the easy refactoring we want.

                        How would this work in a statically typed system? If the pipeline consists of three functions A, B then C, doesn’t B have to be typed such that it only accepts the output of A and produces the input of C? What happens when we add another function between B and C? Or switch the order so A comes last?

                        What would the types look like anyway? Each function needs to output its input plus a bit more: in an OOP language, this quickly becomes a mess of nested objects. Can Haskell do better?

                        Since I cannot actually use Clojure for this project, I’d welcome any advice on doing this in a statically typed language!

                        1. 3

                          In my experience statically typed languages are generally very good at expressing these kinds of systems. Very often, you can express composable pipelines without any purpose-built framework at all, just using ordinary functions! You can write your entire pipeline as a function calling out to many sub-functions, passing just the relevant resources through each function’s arguments and return values. This approach requires you to explicitly specify your pipeline’s dependency graph, which in my experience is actually extremely valuable because it allows you to understand the structure of your program at a glance. The simplicity of this approach makes it easy to maintain and guarantees perfect type safety.
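
                          For concreteness, here is a minimal sketch of that style in Haskell; the types and stage functions (SensorFrame, computeStats, detectObjects) are hypothetical stand-ins for your real ones:

                          ```haskell
                          -- Each stage declares exactly what it consumes and what it adds.
                          data SensorFrame = SensorFrame { pixels :: [Double] }
                          newtype Stats = Stats { mean :: Double }
                          newtype Boxes = Boxes { boxes :: [(Int, Int, Int, Int)] }

                          computeStats :: SensorFrame -> Stats
                          computeStats f = Stats (sum (pixels f) / fromIntegral (length (pixels f)))

                          detectObjects :: SensorFrame -> Boxes
                          detectObjects _ = Boxes [(0, 0, 10, 10)]

                          -- The pipeline is itself just a function: inserting, removing, or
                          -- reordering stages is an ordinary refactor, and the compiler flags
                          -- every call site that needs to change.
                          process :: SensorFrame -> (SensorFrame, Stats, Boxes)
                          process f = (f, computeStats f, detectObjects f)

                          main :: IO ()
                          main = let (_, s, b) = process (SensorFrame [1, 2, 3]) in print (mean s, boxes b)
                          ```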

                          That said, based on your response to @danidiaz, it sounds like you might be doing heavier data processing than a single thread running on a single machine will be able to handle? In that case, depending on the exact kind of processing you’re doing, it’s still possible that you can implement some lightweight parallelism at the function call level without departing too much from modeling your pipeline as an ordinary sequence of function calls. Ordinary (pure) functions are also highly reusable and don’t impose any strong architectural constraints on your system, so you can always scale to a more heavily multi-threaded or distributed environment later without having to re-implement your individual pipeline stages.
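
                          To sketch what I mean by lightweight parallelism at the function call level (using the async package; the stages and delays here are made up):

                          ```haskell
                          import Control.Concurrent (threadDelay)
                          import Control.Concurrent.Async (concurrently)

                          -- Simulated effectful stages (imagine a GPU call and a remote service).
                          statsStage :: [Double] -> IO Double
                          statsStage xs = do
                            threadDelay 100000 -- pretend this takes 100 ms
                            pure (sum xs / fromIntegral (length xs))

                          boxStage :: [Double] -> IO [(Int, Int, Int, Int)]
                          boxStage _ = do
                            threadDelay 100000
                            pure [(0, 0, 10, 10)]

                          -- Independent stages run at the same time, but the pipeline still
                          -- reads as a plain pair of function calls.
                          annotate :: [Double] -> IO ([Double], Double, [(Int, Int, Int, Int)])
                          annotate frame = do
                            (stats, bxs) <- concurrently (statsStage frame) (boxStage frame)
                            pure (frame, stats, bxs)

                          main :: IO ()
                          main = do
                            (_, s, b) <- annotate [1, 2, 3]
                            print (s, b)
                          ```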

                          If you do have to run your system across multiple processes or even multiple machines, then it is definitely harder to express a solution in a type-safe way. Most type systems don’t currently work very well across process or machine boundaries, and a large part of this difficulty stems from the fact that it is inherently challenging to statically verify the coherence of a system whose constituent components might be independently recompiled and replaced while the system is running. I’m not sure how your idiomatic Clojure solution would cope with this scenario either, though. These kinds of questions often turn out to be highly dependent on subtle details, so I’d be curious to hear more about your problem domain and the exact requirements of the system.

                          1. 2

                            You can write your entire pipeline as a function calling out to many sub-functions, passing just the relevant resources through each function’s arguments and return values.

                            That’s basically what Cleanroom does in its “box structures” that decompose into more concrete boxes. Just functional decomposition. It has semi-formal specifications to go with it, plus a limited set of human-verifiable control-flow primitives. The result is that getting things right is a lot easier.

                            1. 1

                              Thank you. Just to emphasize, we are talking about prototyping here. The system I am building is being built to explore possibilities, to find out what the final system should look like. By the time we build the final system, we will have much stricter requirements.

                              I am working on an embedded system. We have limited processing capability on the device itself. We’d like to do as much processing as we can close to the sensors, but we think we will probably need to offload some of the work to remote systems (e.g. the “cloud”). We also haven’t fixed precisely what on-board processing capability we will have. Maybe it will turn out to be more cost-effective to have a slightly more powerful on-board processor, or maybe it will be helpful to have two independent processors, or maybe lots of really cheap processors, or maybe we should offload almost everything. I work in upstream research so nothing is set in stone yet.

                              Furthermore, we don’t know precisely what processing we will need to do in order to achieve our goals. Sorry for being vague about “processing” and “goals” here but I can’t tell you exactly what we’re trying to do. I need to be able to pull apart our data processing pipeline, rearrange stages, add stages, remove stages, etc.

                              We aren’t using Clojure. I just happen to have been binge watching Rich Hickey videos recently and some of his examples struck a chord with me. We are using C++, which I am finding extremely tedious. Mind you, I’ve been finding C++ tedious for about twenty years now :)

                            2. 2

                              Here every function takes a map, adds its results to the map as new fields, then passes on the map to the next function.

                              Naive question: why should functions bother with returning the original map? Why not return only their own results? Could not the original map be kept as a reference somewhere, say, in a let binding?

                              1. 4

                                If your functions pass through information they don’t recognize - i.e. accept a map and return the same map but with additional fields - then what is to be done is completely decoupled from where it is done. You can trivially move part of the pipeline to a different thread, process or across a network to a different machine.
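
                                A minimal sketch of that pass-through style in a typed language, with everything stringly typed for brevity (a real version would use a richer value type):

                                ```haskell
                                import qualified Data.Map.Strict as M

                                -- Hypothetical blob: annotations keyed by name.
                                type Blob = M.Map String String

                                -- Each stage inserts its own results and passes everything else
                                -- through untouched, so it needs no knowledge of upstream stages.
                                addStats :: Blob -> Blob
                                addStats = M.insert "stats.mean" "42.0"

                                addBoxes :: Blob -> Blob
                                addBoxes = M.insert "boxes" "[(0,0,10,10)]"

                                -- All stages share one type, so they compose in any order, and the
                                -- whole map can be serialized to move a stage to another thread,
                                -- process, or machine.
                                pipeline :: Blob -> Blob
                                pipeline = addBoxes . addStats

                                main :: IO ()
                                main = print (pipeline (M.singleton "raw" "sensor-bytes"))
                                ```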

                                You’re absolutely right though, if everything is in a single thread then you can achieve the same thing by adding results to a local scope.

                                At the prototyping stage, I think it’s helpful not to commit too early to a particular thread/process/node design.

                          2. 6

                            I may be too far removed from my time with dynlangs, but I’ve always liked being able to change a type and then very rapidly find all the places it matters when things stop compiling and get highlighted in my editor.

                            1. 5

                              The comment on how Swift makes prototyping harder by forcing developers to express correct types is spot-on, and would apply to other strongly typed languages

                              Quick note on terminology: “strongly typed” is a subjective term. Generally people use it to mean “statically typed with no implicit type coercion”, but that’s not universal. People often refer to both C and Python as strongly typed, despite one having implicit coercion and the other not having static types.

                              1. 1

                                Thanks for the clarification!

                            1. 1

                              Could work as an alternative to -XUnicodeSyntax in Haskell.
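
                              For reference, a small example of what the extension does (UnicodeSyntax only changes which tokens GHC accepts, nothing about semantics):

                              ```haskell
                              {-# LANGUAGE UnicodeSyntax #-}

                              -- With UnicodeSyntax, GHC accepts Unicode variants of several ASCII
                              -- tokens: ∷ for ::, ⇒ for =>, → for ->, ← for <-.
                              double ∷ Num a ⇒ a → a
                              double x = x + x

                              -- The plain ASCII spelling of the same signature:
                              double' :: Num a => a -> a
                              double' = double

                              main :: IO ()
                              main = print (double' (21 :: Int))
                              ```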

                              1. 2
                                Why?

                                I use Vim for almost everything. I wish I didn’t have to say almost. My usual workflow is to open Vim, write, copy the text out of my current buffer and paste it into whatever application I was just using. vim-anywhere attempts to automate this process as much as possible, reducing the friction of using Vim to do more than just edit code.

                                I don’t quite understand the rationale behind this? Why should one prefer vi-editing when, for example, writing prose? It would seem like all you’d get would be typing i before you start writing, and pressing the escape button when you’re done, plus maybe a preferable color theme?

                                Personally, as a guy who generally uses Emacs (and for that reason structurally can’t relate to this issue ^^), I see vi being nice when working on code-like or in some sense structured data, which might or might not have some concept of words and paragraphs. Configuration files, logs, scripts, etc.: things you want to manipulate easily and quickly, on a regular basis. (Maybe that’s the reason I don’t use Vi(m), since I see the editor as a kind of “sword”, with which you quickly strike once and change whatever you need, instead of having it open for extended periods of time and fully living within it, like Emacs. This is also why I don’t like extending vim, since I want it to stay clean and fast.)

                                But back to this project: it seems to me that when I’m writing stuff outside of my editor or a shell, it isn’t the kind of text vi keybindings are good for. Maybe the author has a different experience, and if that’s the case, I’d be very interested in hearing what “reducing the friction of using Vim to do more than just edit code” is supposed to mean.

                                1. 7

                                  I don’t quite understand the rationale behind this? Why should one prefer vi-editing when, for example, writing prose? It would seem like all you’d get would be typing i before you start writing, and pressing the escape button when you’re done, plus maybe a preferable color theme?

                                  • dis deletes a sentence
                                  • '' reverses your last jump
                                  • Search highlights show if you overused a word
                                  • c is great for editing and revisions
                                  • Visual blocks help with adding formatting
                                  • Move by paragraph and move by sentence
                                  1. -1

                                    Most of these tricks seem more like hypothetical advantages that look great in a list than real justifications. How often does one reverse their last jump? Or move the cursor through a text sentence by sentence? Most of the other tricks can be more or less easily emulated with a combination of the shift/control and arrow keys (or, on the Mac and in some GTK versions, using the built-in Emacs keybindings). The “normal” text entry interfaces offered by operating systems are not to be underestimated, after all.

                                    So if one says “I don’t want to learn any keybindings other than vi’s”, one can understand why they would use this, but it still doesn’t appear to be a good reason to me.

                                    1. 4

                                      How often does one reverse their last jump?

                                      I go back to previous editing positions all the time in Vim, using g;

                                      Some other Vim features I find useful for prose:

                                      • autocomplete from terms in the current document
                                      • ya" to copy quoted text
                                      • dap for cutting and moving paragraphs
                                      • zfip for temporarily hiding a paragraph under a fold
                                      • digraphs for inserting an em dash, which is tricky to type otherwise
                                      1. 2

                                        I use all of these tricks regularly when editing prose. And these were just the ones I immediately thought of when reading your thing. There are plenty of other commands I use all the time. None of them may be strictly necessary but it all adds up to a big quality of life improvement.

                                    2. 7

                                      I don’t quite understand the rationale behind this? Why should one prefer vi-editing when, for example, writing prose? ….

                                      I’m not a vi-user, as you might guess, but I can say that a proper text editor is a real boon for writing any sort of text, whether code or light fiction. Word processors are such hostile environments for real production of anything, in my experience.

                                      You say you generally use Emacs - don’t you prefer it for non-code things too? I assume vi(m)-people must feel similarly about their paradigm.

                                    1. 2

                                      Continuations have been used to study natural language phenomena: https://www.goodreads.com/book/show/22693619-continuations-and-natural-language