1.  

    Thanks, @hwayne, that was a really interesting video, and right at the level for me (has read something about TLA+, can remember the order the letters appear in the acronym).

    How do you go from a spec to code in production?

    Carefully. Verifying that code matches a spec, aka refinement, is a really hard problem! To accurately implement your spec, you need good engineers, good tooling, and good process. Nothing can take those needs away.

    This seems like an opportunity: not to automate the thinking, but to automate getting to the point where the thinking can be applied. Are there tools that can go “up” the abstraction stack from code (the most refined version of the spec) to more abstract forms?

    I think it’d be useful both in itself, and as a way to help adopt this way of thinking, if there were a way to compare the designed spec to a derived spec, even if just at the “not glaringly different” level. It seems like it might be something a symbolic executor could produce from the code?

    1.  

      You don’t have to write JavaScript, you have to write Elixir - which has a much smaller community around it than JavaScript does.

      This does look cool though, I just wish there were some live examples I could play with in my browser.

      1.  

        The WYSIWYG editing requires that you’re creating poorly structured single-purpose content.

        I disagree. It might make it harder, and you have to be more disciplined to do so, but using a WYSIWYG editor does not in itself preclude well-structured content.

        1.  

          Yes, and I hope no one thinks that. Roughly, unit tests check if the implementation matches the intent and integration tests check if the different implementations work together to match the intent.

          Your points 1 - 3 typically involve no implementation - they are product identification and development. When we iterate on a solution the boundaries blur a bit as we learn more about the technology as applied to our use case, but we should largely keep them separate.

          There are ways to involve testing in 1 - 3, but these are typically very human-driven, though I think for 3 there are formal methods which require one to - basically - write code (TLA+, for example: https://en.wikipedia.org/wiki/Formal_methods).

          1.  

            If they are referring to non-interactive programs with color output, and even shells with readline-style editing, then it’s better to port to standard VT protocols. Input is not much of an issue for such programs, and the benefits are a more standard interface and being usable over SSH.

            For heavyweight TUI programs it might be better to stay with WinAPI, for the same reason that many users of Emacs prefer its GUI version (even on unixes).

            1.  

              don’t really get what arguments there are for not having DNSSEC

              The benefits don’t outweigh the issues, so it’s not worth it.

              • DNSSEC’s security ultimately is in the hands of whoever runs a top-level domain. Not all domains are run with the same level of, let’s say, competency, so depending on the top-level domain, DNSSEC might provide little to no additional security while pretending otherwise.
              • DNS resolution sits very low in the OS stack, to the point where there’s little room for error reporting. Any certificate-chain validation error is seen by client applications as a failure to resolve a name, and there’s no API to provide more details. This leads to hard-to-debug issues (the sketch below shows what applications actually see).
              • DNSSEC uses late-1990s key strengths and algorithms, and upgrading them is very hard, requiring cooperation between all DNSSEC users at once (read: unlikely to happen).
              • DNSSEC provides no encryption, so it comes with zero privacy benefits for users.

              The increased maintenance burden and the non-existent error reporting for end users are bad enough that they can’t be compensated for by what amounts, in practice, to little more than security theatre.
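
              To make the error-reporting point concrete, here’s a minimal C sketch of what an application actually sees when resolution fails. The default name is just a placeholder (dnssec-failed.org is often cited as a deliberately broken test zone), and the exact error code depends on the resolver:

                  #include <stdio.h>
                  #include <string.h>
                  #include <sys/socket.h>
                  #include <netdb.h>

                  int main(int argc, char **argv) {
                      /* Placeholder: dnssec-failed.org is a commonly cited test zone
                         with deliberately broken DNSSEC signatures. */
                      const char *name = argc > 1 ? argv[1] : "dnssec-failed.org";
                      struct addrinfo hints, *res;
                      memset(&hints, 0, sizeof hints);
                      hints.ai_family = AF_UNSPEC;

                      int err = getaddrinfo(name, NULL, &hints, &res);
                      if (err != 0) {
                          /* This is all the detail the standard API offers. Whether the
                             cause was NXDOMAIN, a network outage, or a DNSSEC validation
                             failure at the resolver is invisible to the application. */
                          fprintf(stderr, "getaddrinfo(%s): %s\n", name, gai_strerror(err));
                          return 1;
                      }
                      freeaddrinfo(res);
                      puts("resolved");
                      return 0;
                  }

              On a validating resolver, a zone with bad signatures typically surfaces as a generic SERVFAIL-style lookup failure, indistinguishable here from a network problem; that’s exactly the hard-to-debug behaviour described above.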

              1.  

                recommendation to start writing (or update existing) Command-Line apps to emit VT enriched text, rather than calling Win32 Console APIs to control/format Console output: VT-enabled apps will not only benefit from rendering improvements like those described above (with more on the way), but they’ll also enjoy richer features available via VT-sequences, like 24-bit color support, and default foreground/background color support.

                Do they recommend updating input handling too? I don’t see problems with VT-style output, but input is messy compared to WinAPI. I can’t imagine Far Manager being rewritten to use VT-like input, with the quirky alt/meta key and the escape key repurposed both for sending escape codes and for the “cancel” action. (On the positive side, it would be usable through SSH and closer to being portable to other OSes.)
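
                On the output side, the quoted recommendation is easy to act on: the 24-bit color support mentioned above is just an SGR escape sequence. A minimal sketch (the RGB values are arbitrary):

                    #include <stdio.h>

                    int main(void) {
                        /* SGR 38;2;R;G;B selects a 24-bit foreground color; SGR 0 resets.
                           This works in any VT/xterm-compatible terminal, including the
                           VT-enabled Windows console described here. */
                        printf("\x1b[38;2;255;128;0mhello in 24-bit orange\x1b[0m\n");
                        return 0;
                    }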

                (At first I thought it was about the Xbox; referring to the terminal as the “console” is one of those strange Microsoft terms.)

                1.  

                  According to Carl Sagan, though he did not use that exact phrase.

                  1.  

                    Fair enough. Perhaps I read your message as being more negative than it was intended to be. I, too, am hopeful for the continued dividends of further human-computer symbiosis, and would love to contribute if I can.

                    Also, as a side note, I would be surprised if you truly do not use GUI text editors. I include vim in this class (but not ed), since it very much has a GUI, even if it is displayed through a terminal.

                    1.  

                      That’s a very different claim, and still a very pessimistic one. Human life was possible in an ice age; I see no reason it would be impossible with an average temperature 2 °C higher. It will certainly strain every government and society, but there’s a large distance between strain and extinction.

                      1.  

                        They’d like that over on /r/socialistprogrammers too.

                        1.  

                          I use Nix at the moment, but briefly dabbled with Guix. Here is a summary of the most obvious differences I came across as a new user:

                          • At least initially, I found making minor changes to packages easier in Guix, as it uses Guile, which means there’s very little syntax to learn; it’s just a Lisp with a package-management API. Nix, for me at least, had a slightly steeper initial learning curve, although having invested some time in learning the basics I do prefer the Nix expression language.
                          • Nix has a much more extensive set of packages in its repository, nixpkgs. I believe the Guix package set was originally generated from nixpkgs, but they aren’t synchronised.
                          • At least when I last tried it, Guix was quite a bit slower than Nix. Probably less of an issue once you’re all set up and only occasionally need to modify your installed packages, but quite frustrating if you’re experimenting, or writing packages for the first time.
                          • Guix has a slightly different set of tools, with a slightly different focus, from Nix’s. See, for example, guix pack (nix-bundle exists, but it’s less mature), guix challenge, and guix weather.
                          • I feel that Guix’s commands present a more consistent and intuitive interface to the user than Nix’s do. For example, Guix has a single executable, guix, with lots of subcommands: guix package, guix build, etc. Nix has lots of separate executables - nix-env, nix-build, nix-instantiate, nix-shell - and then, as of Nix 2.x, also the nix executable with various subcommands such as nix build. I believe this fragmentation and duplication is due to a gradual transition to the Nix 2 interface. I assume the eventual goal is for nix to do everything via subcommands, deprecating the others.
                          • Guix documentation is more comprehensive and better organised. Nix has some good documentation, but it hasn’t been kept up to date. For example, the Nix Manual doesn’t appear to document the new nix command at all.
                          • In theory at least, Nix and Guix use the same store format and can share a store.
                          1.  

                            I certainly ran Emacs Lisp on a 386 SX-16, and it ran fine. I didn’t happen to run Common Lisp on it, mainly because I wasn’t into it, or maybe there were only commercial implementations of it back then. But I would be pretty surprised if reasonable applications in CL weren’t viable on a 386 or 68020 in 1990. Above-average amounts of RAM were helpful (4 or 8 MB instead of 1 or 2).

                            1.  

                              I don’t really get what arguments there are for not having DNSSEC. But then, the world is complex and has quite a few people.

                              Good to see that there is an actual working revocation mechanism, though - including … well done and very clear.

                              1.  

                                Most Windows Applications use the console API which sucks because it doesn’t support basic functionality that UNIX-based environments have had for years

                                As a developer, I often feel the opposite. The unix terminal is a crippled mess, while the Windows console works fairly easily. Look at keyboard input, for example. The Windows API will return nice key codes with modifier flags. It’ll even tell you key up events if you care! Unix terminals can only send you character events easily, and the rest of the keyboard is a mix of various sequences that may or may not give you the modifiers. Assuming the sequences are all xterm or using ncurses etc kinda sorta takes care of the various sequences. Kinda. Sorta. They still frequently don’t work or get broken up or have awkward delays (you can’t assume the esc char is the escape key without checking for more input). Some terminals send stuff you cannot detect without a full list.
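
                                For comparison, here’s a rough sketch of the Windows side (from memory of the documented Win32 console API, so treat it as illustrative): key events arrive as tagged INPUT_RECORD structs with virtual key codes, up/down state, and modifier flags already decoded.

                                    #include <windows.h>
                                    #include <stdio.h>

                                    int main(void) {
                                        HANDLE in = GetStdHandle(STD_INPUT_HANDLE);
                                        DWORD oldMode, n;
                                        INPUT_RECORD rec;

                                        GetConsoleMode(in, &oldMode);
                                        SetConsoleMode(in, ENABLE_WINDOW_INPUT); /* no line buffering or echo */

                                        while (ReadConsoleInput(in, &rec, 1, &n)) {
                                            if (rec.EventType != KEY_EVENT)
                                                continue; /* tagged struct: skip mouse/resize events */
                                            KEY_EVENT_RECORD k = rec.Event.KeyEvent;
                                            printf("%s vk=0x%02X shift=%d ctrl=%d alt=%d\n",
                                                   k.bKeyDown ? "down" : "up",
                                                   k.wVirtualKeyCode,
                                                   !!(k.dwControlKeyState & SHIFT_PRESSED),
                                                   !!(k.dwControlKeyState & (LEFT_CTRL_PRESSED | RIGHT_CTRL_PRESSED)),
                                                   !!(k.dwControlKeyState & (LEFT_ALT_PRESSED | RIGHT_ALT_PRESSED)));
                                            if (k.wVirtualKeyCode == VK_ESCAPE && k.bKeyDown)
                                                break; /* Escape is just another key code here */
                                        }
                                        SetConsoleMode(in, oldMode);
                                        return 0;
                                    }

                                Shift+enter, for instance, simply shows up as VK_RETURN with SHIFT_PRESSED set; no sequence sniffing required.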

                                Oh, and try to treat shift+enter differently than enter in a Unix terminal program. Some terminals send codes to distinguish it… which can make programs do random stuff when they don’t recognize it.

                                On Windows, the structs are tagged. You can easily skip messages you don’t want to handle. Of course, both Windows and unix are based on some underlying hardware and I appreciate the history of it, but…

                                …I loathe the VT “protocol”. …but alas, rewriting all those programs isn’t going to happen, so I get it. And I know there are some advantages. I like the pty thing as much as the next person - it is cool to be able to easily redirect all this stuff, it is reasonably efficient in terms of bytes transferred, and the cooked mode has the potential to massively decrease latency (though most programs turn that off as soon as they can, since it sucks in practice)… I just wish we were redirecting a sane, well-defined protocol instead, and I’d prefer the console API as the base feature set.

                                Oh well. Hopefully everyone will at least just agree to use xterm escape sequences everywhere.

                                1.  

                                  I can say nothing more than ditto.

                                  1.  

                                    The change has been reverted with the message “The world is not ready for dnssec enabled by default.”

                                    1.  

                                      I read up on the control characters yesterday; in particular, I paid attention to RFC 20, because the codes seem to have remained mostly the same since then and that document is the easiest to comprehend.

                                      The transmission and device control characters seem to be the safest to use in text streams. They are used as control signals in terminal software, but otherwise they don’t seem to trigger anything in file output. Therefore I can probably use the following block of characters:

                                      [01-06]     SOH STX ETX EOT ENQ ACK
                                      [10-16] DLE DC1 DC2 DC3 DC4 NAK SYN
                                      

                                      Emacs, nano, vim, and gedit all seem to print these characters gladly and still encode the file as UTF-8. I also opened a Linux console and ‘cat’ed these in. There were no visible effects of any sort, so I guess it’s safe to reuse these codes.

                                      Most transmission channels seem to assume that the stream is binary and that anything goes over, so I doubt this would have too many negative effects - except that, of course, some other software could no longer reuse these codes. Maybe it doesn’t hurt to provide some small file header that’s easy to type with a text editor.

                                      I don’t need this many, so I will probably leave DC1-DC4 without interpretation and only reuse the transmission control characters in this range.
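
                                      As a quick sanity check of the idea, here’s a small C sketch that embeds a few of these transmission control bytes in an otherwise ordinary UTF-8 file. The marker meanings are hypothetical, just one possible interpretation:

                                          #include <stdio.h>

                                          int main(void) {
                                              /* Bytes 0x01-0x06 are valid single-byte UTF-8 code points,
                                                 so the file remains well-formed UTF-8 throughout. */
                                              FILE *f = fopen("records.txt", "wb");
                                              if (!f) return 1;
                                              fputc(0x01, f);                    /* SOH: start of heading */
                                              fputs("example heading", f);
                                              fputc(0x02, f);                    /* STX: start of text */
                                              fputs("plain UTF-8 body text\n", f);
                                              fputc(0x03, f);                    /* ETX: end of text */
                                              fputc(0x04, f);                    /* EOT: end of transmission */
                                              fclose(f);
                                              return 0;
                                          }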

                                      And regarding Dreamweaver… I think that’s a bit of a different thing. I’m not after a WYSIWYG editor, but rather something that is good for content processing yet leaves files suitable to examine with a good old text editor.

                                      The WYSIWYG editing requires that you’re creating poorly structured single-purpose content.

                                      1.  

                                        I never understand Microsoft’s approach to the terminal. It is the single most important reason I prefer a Linux environment. I can literally put a USB stick in my desktop, enter three lines of code (download my own install script, make it executable, and execute it), and come back half an hour later to an installed, configured desktop.

                                        Meanwhile, PowerShell sucks big time. If you enter a non-existent command, the background color of the error message doesn’t even match the background of the terminal window. This is something that a junior dev should be able to fix in a couple of hours, not something that should still be there in the most important automation tool in the biggest OS made by one of the richest companies on earth. While it’s not important in itself, it is typical of the attitude that Microsoft has towards empowering users.

                                        It’s good they are finally deciding to drop some policies which are mostly harmful to themselves. They are finally jumping on the open-source bandwagon, even though they are extremely late to the party (as you can expect from a company that big and presumably that bureaucratic).

                                        1.  

                                          I think the DOD-STD-2167 waterfall requirement came from a desire to harmonise the software development with the hardware development, and probably originated in thinking of projects where the software is a tiny part of the whole.