1. 1

    As a genuine question from someone who hasn’t used procedural programming productively before, what would be the benefits of a procedural language to justify its choice?

    1. 3

      I would say less conceptual/cognitive overhead, but I don’t know if that’s something that can be said of this language as a whole, as I have no experience with it.

      By that I mean something like: I have a rough idea of what code I want from the compiler, how much mental gymnastics is required to arrive at the source-level code that I need to write?

      I would imagine that’s an important consideration in a language designed for game development.

      1. 4

        Yeah, it makes perfect sense.

        To dumb down Kit’s value prop, it’s a “Better C, for people who need C (characteristics)”.

      2. 2

        On top of alva’s comment, they compile fast and are easy to optimize, too.

        1. 1

          I looked this up for some other article on lobste.rs. I found Wikipedia to have a nice summary:

          https://en.wikipedia.org/wiki/Procedural_programming

          Imperative programming

          Procedural programming languages are also imperative languages, because they make explicit references to the state of the execution environment. This could be anything from variables (which may correspond to processor registers) to something like the position of the “turtle” in the Logo programming language.

          Often, the terms “procedural programming” and “imperative programming” are used synonymously. However, procedural programming relies heavily on blocks and scope, whereas imperative programming as a whole may or may not have such features. As such, procedural languages generally use reserved words that act on blocks, such as if, while, and for, to implement control flow, whereas non-structured imperative languages use goto statements and branch tables for the same purpose.

          My understanding is that if you use, say, C, you are basically using procedural language paradigms.

          1. 2

            Interesting. So basically what was registering in my mind as imperative programming is actually procedural.

            Good to know. Thanks for looking it up!

            1. 2

              I take “imperative” to mean based on instructions/statements, e.g. “do this, then do that, …”. An “instruction” is something which changes the state of the world, i.e. there is a concept of “before” and “after”. Lots of paradigms can sit under this umbrella, e.g. machine code (which are lists of machine instructions), procedural programming like C (where a “procedure”/subroutine is a high-level instruction, made from other instructions), OOP (where method calls/message sends are the instructions).

              Examples of non-imperative languages include functional programming (where programs consist of definitions, which (unlike assignments) don’t impose a notion of “before” and “after”) and logic programming (similar to functional programming, but definitions are more flexible and can rely on non-deterministic search to satisfy, rather than explicit substitution)
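
              To make the contrast concrete, here is a tiny sketch of my own, describing the same “sum a list” idea both ways:

              -- Imperative flavour (pseudocode): "set total to 0; for each x in the list, add x to total; return total."
              -- Definitional flavour: the sum of a list is *defined* by these two equations.
              total :: [Int] -> Int
              total []     = 0             -- the sum of the empty list is 0
              total (x:xs) = x + total xs  -- the sum of x:xs is x plus the sum of the rest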

              1. 1

                If functional programs don’t have a notion of before and after, how do you code an algorithm? Explain Newton’s method as a definition.

                  1. 1

                    Both recursion and iteration say “do this, then do that, then do …”. And “let” appears to be assignment or naming, so that AFTER the let operation a symbol has a meaning it did not have before.

                    // open some namespaces
                    open System
                    open System.Drawing
                    open System.Windows.Forms
                    open Math
                    open FlyingFrog
                    

                    Opening those namespaces changes program state so that certain operations become visible AFTER those lines are executed, etc.

                    1. 3

                      It is common for computation to not actually take place until the result is immediately needed. Your code may describe a complicated series of maps and filters and manipulations and only ever execute enough to get one result. Your code looks like it describes a strict order of execution, but the actual execution may take a drastically different path.

                      A pure functional programming language wouldn’t be changing program state, but passing new state along, probably recursively.

                      1. 1

                        But you don’t really have a contrast with “imperative” languages - you still specify an algorithm. In fact, algorithms are all over traditional pure mathematics too. Generally the “state” being changed is on a piece of paper or in the head of the reader, but …

                      2. 1

                        so that AFTER the let operation

                        If we assume that let is an operation, then there is certainly a before and an after.

                        That’s not the only way to think about let though. We might, for example, treat it as a form of linguistic shorthand, treating:

                        let x = somethingVeryLongWindedInvolving y in x * x
                        

                        as a shorthand for:

                        (somethingVeryLongWindedInvolving y) * (somethingVeryLongWindedInvolving y)
                        

                        There is no inherent notion of before/after in such an interpretation. Even if our language implements let by literally expanding/elaborating the first form into the second, that can take place at compile time, alongside a whole host of other transformations/optimisations; hence even if we treat the expansion as a change of state, it wouldn’t actually occur at run time, and thus does not affect the execution of any algorithm by our program.

                        Note that we might, naively, think that the parentheses are imposing a notion of time: that the above tells us to calculate somethingVeryLongWindedInvolving y first, and then do the multiplication on the results. Call-by-name evaluation shows that this doesn’t have to be the case! It’s perfectly alright to do the multiplication first, and only evaluate the arguments if/when they’re needed; this is actually preferable in some cases (like the K combinator).
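
                        A tiny sketch of that last point in Haskell (whose non-strict, call-by-need evaluation gives the same observable behaviour as call-by-name here):

                        -- The K combinator: returns its first argument and never looks at its second.
                        k :: a -> b -> a
                        k x _ = x

                        -- Non-strict evaluation never touches the diverging second argument.
                        example :: Int
                        example = k 42 undefined   -- evaluates to 42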

                    2. 2

                      If functional programs don’t have a notion of before and after, how do you code an algorithm?

                      Roughly speaking, we define each “step” of an algorithm as a function, and the algorithm itself is defined as the result of (some appropriate combination of) those functions.

                      As a really simple example, let’s say our algorithm is to reverse a singly-linked-list, represented as nested pairs [x0, [x1, [x2, ...]]] with an empty list [] representing the “end”. Our algorithm will start by creating a new empty list, then unwrap the outer pair of the input list, wrap that element on to its new list, and repeat until the input list is empty. Here’s an implementation in Javascript, where reverseAlgo is the algorithm I just described, and reverse just passes it the new empty list:

                      var reverse = (function() {
                        function reverseAlgo(result, input) {
                          return (input.length === 0) ? result : reverseAlgo([input[0], result], input[1]);
                        };
                        return function(input) { return reverseAlgo([], input); };
                      })();
                      

                      Whilst Javascript is an imperative language, the above is actually pure functional programming (I could have written the same thing in e.g. Haskell, but JS tends to be more familiar). In particular, we’re only ever defining things, in terms of other things. We never update/replace/overwrite/store/retrieve/etc. This style is known as single assignment.
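
                      For comparison, here is a sketch of how the same accumulating-parameter definition might look in Haskell, with the ordinary list type playing the role of the nested pairs:

                      reverse' :: [a] -> [a]
                      reverse' = go []
                        where
                          go acc []     = acc           -- input exhausted: the accumulator holds the reversed list
                          go acc (x:xs) = go (x:acc) xs -- move the head onto the accumulator, recurse on the tail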

                      For your Newton-Raphson example, I decided to do it in Haskell. Since it uses Float for lots of different things (inputs, outputs, epsilon, etc.) I also defined a bunch of datatypes to avoid getting them mixed up:

                      module Newton where
                      
                      newtype Function   = F (Float -> Float)
                      newtype Derivative = D (Float -> Float)
                      newtype Epsilon    = E Float
                      newtype Initial    = I Float
                      newtype Root       = R (Float, Function, Epsilon)
                      
                      newtonRaphson :: Function -> Derivative -> Epsilon -> Initial -> Root
                      newtonRaphson (F f) (D f') (E e) (I x) = if abs y < e
                                                                  then R (x, F f, E e)
                                                                  else recurse (I x')
                      
                        where y  = f x
                      
                              x' = x - (y / f' x)
                      
                              recurse = newtonRaphson (F f) (D f') (E e)
                      

                      Again, this is just defining things in terms of other things. OK, that’s the definition. So how do we explain it as a definition? Here’s my attempt:

                      Newton’s method of a function f + guess g + epsilon e is defined as the “refinement” r of g, such that |f(r)| < e. The “refinement” of some number x depends on whether x satisfies our epsilon inequality: if so, its refinement is just x itself; otherwise it’s the refinement of x - (f(x) / f'(x)).

                      This definition is “timeless”, since it doesn’t talk about doing one thing followed by another. There are causal relationships between the parts (e.g. we don’t know which way to “refine” a number until we’ve checked the inequality), but those are data dependencies; we don’t need to invoke any notion of time in our semantics or understanding.
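
                      As a hypothetical usage example (not part of the definitions above), here is how one might call it to approximate the square root of 2, i.e. a root of f(x) = x*x - 2:

                      sqrtTwo :: Root
                      sqrtTwo = newtonRaphson (F (\x -> x*x - 2)) (D (\x -> 2*x)) (E 0.0001) (I 1.0)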

                      1. 2

                        Our algorithm will start by creating a new empty list, then unwrap the outer pair of the input list, wrap that element on to its new list, and repeat until the input list is empty.

                        Algorithms are essentially stateful. A strongly declarative programming language like Prolog can avoid or minimize explicit invocation of algorithms because it is based on a kind of universal algorithm that is applied to solve the constraints that are specified in a program. A “functional” language relies on a smaller set of control mechanisms to reduce, in theory, the complexity of algorithm specification, but “recursion” specifies what to do when just as much as a “goto” does. Single assignment may have nice properties, but it’s still assignment.

                        To me, you are making a strenuous effort to obfuscate the obvious.

                        1. 3

                          Algorithms are essentially stateful.

                          I generally agree. However, I would say programming languages don’t have to be.

                          When we implement a stateful algorithm in a stateless programming language, we need to represent that state somehow, and we get to choose how we want to do that. We could use successive “versions” of a data structure (like the accumulating parameter in my ‘reverse’ example), or we could use a call stack (very common if we’re not making tail calls), or we could even represent successive states as elements of a list (lazy lists in Haskell are good for this).
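
                          For instance, here is a sketch of the lazy-list option (using plain Floats rather than the newtypes from my Newton example):

                          -- Every successive “state” of the Newton iteration, as one immutable (infinite) list.
                          approximations :: (Float -> Float) -> (Float -> Float) -> Float -> [Float]
                          approximations f f' = iterate (\x -> x - f x / f' x)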

                          A strongly declarative programming language like Prolog can avoid or minimize explicit invocation of algorithms because it is based on a kind of universal algorithm that is applied to solve the constraints that are specified in a program.

                          I don’t follow. I think it’s perfectly reasonable to say that Prolog code encodes algorithms. How does Prolog’s use of a “universal algorithm” (depth-first search) imply that Prolog code isn’t algorithmic? Every programming language is based on “a kind of universal algorithm”: Python uses a bytecode interpreter, Haskell uses beta-reduction, even machine code uses the stepping of the CPU. Heck, that’s the whole point of a Universal Turing Machine!

                          “recursion” specifies what to do when just as much as a “goto” does.

                          I agree that recursion can be seen as specifying what to do when; this is a different perspective of the same thing. It’s essentially the contrast between operational semantics and denotational semantics.

                          I would also say that “goto” can be seen as a purely definitional construct. However, I don’t think it’s particularly useful to think of “goto” in this way, since it generally makes our reasoning harder.

                          To me, you are making a strenuous effort to obfuscate the obvious.

                          There isn’t “one true way” to view these things. I don’t find it “strenuous” to frame things in this ‘timeless’ way; indeed I personally find it easier to think in this way when I’m programming, since I don’t have to think about ‘time’ at all, just relationships between data.

                          Different people think differently about these things, and it’s absolutely fine (and encouraged!) to come at things from different (even multiple) perspectives. That’s often the best way to increase understanding, by finding connections between seemingly unrelated things.

                          Single assignment may have nice properties, but it’s still assignment.

                          In name only; its semantics, linguistic role, formal properties, etc. are very different from those of memory-cell-replacement. Hence why I use the term “definition” instead.

                          The key property of single assignment is that it’s unobservable by the program. “After” the assignment, everything that looks will always see the same value; but crucially, “before” the assignment nothing is able to look (since looking creates a data dependency, which will cause that code to be run “after”).

                          Hence the behaviour of a program that uses single assignment is independent of when that assignment takes place. There’s no particular reason to assume that it will take place at one time or another. We might kid ourselves, for the sake of convenience, that such programs have a state that changes over time, maybe going so far as to pretend that these hypothetical state changes depend in some way on the way our definitions are arranged in a text file. Yet this is just a (sometimes useful) metaphor, which may be utterly disconnected from what’s actually going on when the program (or, perhaps, a logically-equivalent one, spat out of several stages of compilation and optimisation!) runs.

                          Note that the same is true of the ‘opposite’ behaviour: garbage collection. A program’s behaviour can’t depend on whether or not something has been garbage collected, since any reference held by such code will prevent it from being collected! Garbage collection is an implementation detail that’s up to the interpreter/runtime-system; we can count on it happening “eventually”, and in some languages we may even request it, but adding it to our semantic model (e.g. as specific state transitions) is usually an overcomplication that hinders our understanding.

                          1. 1

                            A lot of what you see as distinctive in functional languages is common to many non-functional languages. And look up Prolog - it is a very interesting alternative model.

                            1. 1

                              A lot of what you see as distinctive in functional languages is common to many non-functional languages.

                              You’re assuming “what I see”, and your assumption is wrong. I don’t know where you got this idea from, but it’s not from me.

                              I actually think of “functional programming” as a collection of styles/practices which have certain themes in common (e.g. immutability). I think of “functional programming languages” as simply those which make programming in a functional style easier (e.g. eliminating tail calls, having first-class functions, etc.) and “non-functional programming languages” as those which make those styles harder. Most functional programming practices are possible in most languages.

                              In other words, I agree that “A lot of [features of] functional languages is common to many non-functional languages”, but I have no idea why you would claim I didn’t.

                              Note that in this thread I’ve not tried to claim that, e.g. “functional programming languages are better”, or anything of that nature. I was simply stating the criteria I use for whether to call a style/language “imperative” or not; namely, if its semantics are most usefully understood as executing instructions to change the state of the (internal or external) world.

                              And look up Prolog - it is a very interesting alternative model.

                              I’m well aware of Prolog. The research group I was in for my PhD did some fascinating work on formalising and improving logic programming in co-inductive settings, although I wasn’t directly involved in that. For what it’s worth I’m currently writing a project in Mercury (a descendant of Prolog, with static types among other things).

                1. 1

                  So procedural languages are similar to imperative languages, but with somewhat more abstraction?

              1. 5

                What’s the point of using DocBook over HTML5? Every one of the elements described maps 1:1 to an HTML5 element. Is it because there’s a larger set of tools to take DocBook and typeset it for printing? Or is the verbosity of DocBook a usability advantage (<para> vs. <p>)?

                1. 3

                  Great question!

                  DocBook has lots of tools for rendering to much more than just HTML: literal books, PDFs, manpages, knowledge bases, even special formats that some editors can find and interpret to provide contextual help on a project.

                  DocBook has special tags to help represent EBNF and functions and types and GUIs and error messages and function arguments and command line program options and variable names and and and and.

                  Yes, every element described maps 1:1 but there are hundreds more which are undescribed by this post and are useful for large documentation projects.

                  Edited to say: much of this is not strictly necessary for getting started and writing some docs, which is the goal of this document. It is so easy to look at the massive number of tags and try to pick the most perfect semantics for your writing, when the truth is that having any docs contributed at all is much more important. Leave the precise and fiddly semantics to the maintainers and PR reviewers. Let’s just write some docs.

                  1. 2

                    I’ve been considering building a system to auto-generate documentation from the source code and comments of different languages into a common format with no styling information. Would DocBook be a good format-of-record for my project?

                    I’d like to use the preferred tool for each language to extract comments-based docs, and then a single new tool to combine source-derived documentation with human-authored guides into a final presentation format:

                    • Ruby -> YARD -> DocBook
                    • Java -> Javadoc -> DocBook
                    • Lang -> Native tool for Lang -> DocBook
                    • Markdown -> DocBook

                    Then at the end, I can take all the DocBook and unify the presentation style:

                    • DocBook -> HTML
                    1. 1

                      You could try integrating pandoc; it has a Lua or Haskell API, and an internal intermediate representation: https://pandoc.org/index.html
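
                      If the Haskell route appeals, a minimal sketch of the DocBook -> HTML step using pandoc as a library could look like this (function names as of pandoc 2.x; worth double-checking against whichever version you install):

                      module Render where

                      import qualified Data.Text.IO as T
                      import           Text.Pandoc

                      -- Read a DocBook file and write it back out as an HTML5 fragment, using default options.
                      docbookToHtml :: FilePath -> FilePath -> IO ()
                      docbookToHtml input output = do
                        xml  <- T.readFile input
                        html <- runIOorExplode (readDocBook def xml >>= writeHtml5String def)
                        T.writeFile output html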

                    2. 1

                      Follow-up questions: what toolchain do you use?

                      And are there styles for rendering DocBook that feel satisfactory to people used to LaTeX typesetting?

                  1. 7

                    Bad idea, it should error or give NaN.

                    1/0 = 0 is mathematically sound

                    It’s not mathematically sound.

                    a/b = c should be equivalent to a = c*b

                    this fails with 1/0 = 0 because 1 is not equal to 0*0.

                    Edit: I was wrong, it is mathematically sound. You can define x/0 = f(x) for any function f of x at all. All the field axioms still hold because they all have preconditions that ensure you never look at the result of division by zero.

                    There is a subtlety because some people say (X) and others say (Y)

                    • (X) a/b = c should be equivalent to a = c*b when the LHS is well defined

                    • (Y) a/b = c should be equivalent to a = c*b when b is nonzero

                    If you have (X) definition in mind it becomes unsound, if you are more formal and use definition (Y) then it stays sound.

                    It seems like a very bad idea to make division well defined when the expected algebra rules don’t apply to it. This is the whole reason we leave it undefined or make it an error: there isn’t any value you can give it that makes the algebra work.

                    It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.
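
                    To sketch the alternative I have in mind (in Haskell rather than Pony, purely as an illustration): keep division total, but make the zero case impossible to ignore.

                    safeDiv :: Int -> Int -> Maybe Int
                    safeDiv _ 0 = Nothing           -- the mistake is surfaced, not silently turned into 0
                    safeDiv a b = Just (a `div` b)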

                    1. 14

                      I really appreciate your follow-up about you being wrong. It is rare to see, and I commend you for it. Thank you.

                      1. 8

                        This is explicitly addressed in the post. Do you have any objections to the definition given in the post?

                        1. 13

                          I cover that exact objection in the post.

                          1. 4

                            It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values

                            That was my initial reaction too. But I don’t think Pony’s intended use case is numerical analysis; it’s for highly parallel low-latency systems, where there are other (bigger?) concerns to address. They wanted to have no runtime exceptions, so this is part of that design tradeoff. Anyway, nothing prevents the programmer from checking for zero denominators and handling them as needed. If you squint a little, it’s perhaps not that different from the various conventions on truthy/falsey values that exist in most languages, and we’ve managed to accommodate to those.

                            1. 4

                              Those truthy/falsey values are often a source of errors.

                              I may be biased in my dislike of this “feature”, because I cannot recall when 1/0 = 0 would be useful in my work, but have no difficulty whatsoever thinking of cases where truthy/falsey caused problems.

                            2. 4

                              1/0 is integer math. NaN is available for floating point math not integer math.

                              1. 2

                                It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.

                                I wonder if someone making a linear math library for Pony already faced this. There are many operations that might divide by zero, and you will want to let the user know if they divided by zero.

                                1. 7

                                  It’s easy for a Pony user to create their own integer division operation that will be partial. Additionally, a “partial division for integers” operator has been in the works for a while and will land soon. It’s part of a set of operators that will also error if you have integer overflow or underflow. Those will be +?, /?, *?, -?.

                                  https://playground.ponylang.org/?gist=834f46a58244e981473c0677643c52ff

                              1. 4

                                I’m interested in the pretty impressive performance delta – I wouldn’t have thought that Zen could outperform Broadwell quite so handily!

                                1. 2

                                  Me too! I’ll be completely honest: I have no idea what factors contributed here. Maybe things like no NUMA? a bit more cache? Something with Spectre / Meltdown? No idea – not my forte – but I am sure delighted by it.

                                  1. 8

                                    EPYC is way more NUMA than Intel equivalents. EPYC has four dies on one package, and each die is a NUMA domain.

                                    But Meltdown mitigations are indeed usually only turned on for Intel! :)

                                    1. 1

                                      Disk possibly?

                                  1. 15

                                    Rust meanwhile notes that you can’t safely write a performant data structure in Rust, so they urge you not to do that.

                                    The interesting thing to me is the linked FAQ (https://www.rust-lang.org/en-US/faq.html#can-i-implement-linked-lists-in-rust) literally doesn’t say that.

                                    It says:

                                    1. Efficient implementations of many data structures are provided by the standard library.
                                    2. Most data structures can be done in safe Rust, but for comparable performance, some would be better implemented using unsafe Rust. Not that it is impossible.
                                    3. Goes on to provide a specific example of how to do a doubly linked list in safe Rust, and describes how to do it also with unsafe Rust for better performance.

                                    I wonder if this was an oversight or misunderstanding?

                                    1. 10

                                      As a follow-up, in the conclusion you say:

                                      I think that in practice they may not be making real life shipped code a lot more secure - also because not that much actual Rust code is shipping.

                                      While just one of the undoubtedly many examples which could be brought up, I hadn’t realized the Quantum CSS engine from Firefox was so short! More seriously, the achievements in Firefox are remarkable and inspiring, and is a large amount of code shipping to real users, and used every day.


                                      One thing I like very much about the borrow checker is that it took memory access problems and turned them into a generic resource safety problem. Using the simple primitives available I’m able to easily encode usage requirements, limitations, and state changes through borrows and Drops and have them checked at compile time. This is really powerful, and I very much appreciate it.

                                      For whatever it is worth, I’m a rubbish C dev – not to be trusted to write a single line – who has found Rust to be a comfortable and pleasant experience in only a few weeks of free-time practice.

                                      1. 2

                                        Hi - I worded this incorrectly. What I meant to say was that the FAQ says performance will disappoint unless you go into unsafe mode. “For example, a doubly-linked list requires that there be two mutable references to each node, but this violates Rust’s mutable reference aliasing rules. You can solve this using Weak, but the performance will be poorer than you likely want. With unsafe code you can bypass the mutable reference aliasing rule restriction, but must manually verify that your code introduces no memory safety violations.”. I’ve updated the wording a bit. Apologies for the confusion.

                                        1. 6

                                          It is improved, but they don’t urge you not to do it. And even then, unsafe Rust is still safer than C.

                                          1. 3

                                            Oh boy, what a quote.

                                            1. 4

                                              It definitely is in the context of the bigger picture. The default in C for doing Rust’s level of safety is separation logic with tools like VCC. Hard to learn, slow to develop (2 LOC/day at one point), and the solvers are likely slower than compiler checks. Rust brought that level of safety to most apps using a simplified model and checker. Both the checker and resulting code usually perform well. The quote indicates it can’t handle some optimized, low-level data structures. Those atypical cases will need verification with an external tool or method.

                                              In light of that, Rust gets safer code faster than C in most verification cases but maybe equivalent labor in some. Rust still seems objectively better than C on being safe, efficient, and productively so.

                                              1. 2

                                                there are languages in which linked lists are primitives or maybe even invisible. But if you are going to specify a language for writing programs that include linked lists, you should not have to use pragmas. This is a characteristic computer science trope: “we have a rigorous specification of this which cannot ever be violated unless you push the magic button”. It’s a way of acting as if you have solved a problem that you have simply hidden.

                                        1. 1

                                          Very cool program! So exciting to see .nix files already there!

                                          1. 1

                                            Glad you like it :) All the .nix credit goes to nmattia, but let me know in the issue tracker if you run into trouble. Afaik it only works with the unstable channel (and we didn’t pin the nixpkgs version yet).

                                          1. 1

                                            Nice upgrades to 2TB SSDs.

                                            Personally find it funny how OpenGrok is so bloated that it still has to run on the spindles, hugging along next to the backups — even a 512GB SSD is no fit when you’re dealing with Enterprise-level software written in Java. :-)

                                            1. 1

                                              I wonder if they’ve examined Hound – https://github.com/etsy/hound – I’ve found it to be much more performant when compared to OpenGrok, while still providing excellent results.

                                            1. 3

                                              Is the only difference between Guix and Nix the language? I know Nix is more mature and has a bigger community with more packages, but I don’t see any other user-facing differences between the two.

                                              1. 4

                                                I guess there are many differences? One important one is the license. The FSF prefers Guix and GuixSD over Nix and NixOS.

                                                1. 1

                                                  that’s…not really…a difference in the technology

                                                  1. 3

                                                    Oh, you wanted only technological differences? Sorry about that. :)

                                                2. 3

                                                  Guix is or was based on the Nix daemon and is essentially just a fork, substituting Scheme for the Nix language, plus its own requirements for packaging. This was several years ago now; it may have diverged further.

                                                1. 1

                                                  I wonder if the Firefox build team has considered exploring Nix for allowing the builders to be internet-free, but without bundling dependencies in the repo.

                                                  1. 3

                                                    Does Nix work on Windows? The Firefox build team must produce a Windows binary; in fact, it is the most important build in terms of users.

                                                    1. 1

                                                      My understanding is that it works on the WSL, but that’s not real Windows.

                                                  1. 3

                                                    Way to go Domen! I completely agree, the Nix ecosystem needs tools like Cachix to support Nix in production and at small companies. I’m delighted to see this released, and look forward to giving it a try this weekend!

                                                    1. 22

                                                      May I recommend putting in paragraph zero, “Use shellcheck, dummy!”?

                                                      1. 3

                                                        It’s in the readme. The linked document is meant as an addendum. I’ll think about it.

                                                        Update: Added a preface.

                                                      1. 7

                                                        That’s interesting that the company behind it is CZ.NIC the owner/operator of the .cz domain name!

                                                        1. 5

                                                          And also the authors of Knot, the DNS services behind 1.1.1.1.

                                                        1. 6
                                                          1. 4

                                                            Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo

                                                          1. 8

                                                            Interesting! Did you consider using expect to implement this? I’ve seen some pretty wild implementations using expect!

                                                            1. 2

                                                              I did not! I am not familiar with expect. How might it help?

                                                              1. 10

                                                                expect is a great program for driving interactive programs however you choose, check this out.

                                                                Here is test.expect:

                                                                spawn bash
                                                                
                                                                set timeout 1
                                                                
                                                                send "echo input1 | rev\n"
                                                                expect {
                                                                  "1tupni" {
                                                                    puts "Got 1tupni!"
                                                                  }
                                                                  timeout {
                                                                    puts "didn't get 1tupni soon enough..."
                                                                    exit 1
                                                                  }
                                                                }
                                                                
                                                                
                                                                send "echo input2 | rev\n"
                                                                expect {
                                                                  "2tupni" {
                                                                    puts "Got 2tupni!"
                                                                  }
                                                                  timeout {
                                                                    puts "didn't get 2tupni soon enough..."
                                                                    exit 1
                                                                  }
                                                                }
                                                                
                                                                # Note I used `input3` here but look for `input4` 
                                                                send "echo input3 | rev\n"
                                                                expect {
                                                                  "4tupni" {
                                                                    puts "Got 4tupni!"
                                                                  }
                                                                  timeout {
                                                                    puts "didn't get 4tupni soon enough..."
                                                                    exit 1
                                                                  }
                                                                }
                                                                
                                                                exit 0
                                                                

                                                                And running it:

                                                                Morbo> expect ./test.expect
                                                                spawn bash
                                                                echo input1 | rev
                                                                
                                                                [grahamc@Morbo:~/projects/student-programs]$ echo input1 | rev
                                                                1tupni
                                                                Got 1tupni!
                                                                echo input2 | rev
                                                                
                                                                [grahamc@Morbo:~/projects/student-programs]$ echo input2 | rev
                                                                2tupni
                                                                Got 2tupni!
                                                                echo input3 | rev
                                                                
                                                                [grahamc@Morbo:~/projects/student-programs]$ echo input3 | rev
                                                                3tupni
                                                                
                                                                [grahamc@Morbo:~/projects/student-programs]$ didn't get 4tupni soon enough...
                                                                Morbo> echo $?
                                                                1
                                                                
                                                                1. 3

                                                                  Whoa, that’s nuts. Good to know for the future, definitely!

                                                                  1. 2

                                                                    Lots of languages have “expect” libraries; I had good results with a Python one.

                                                            1. 6

                                                              Something I hope to be covered is text reflowing, where you can resize your terminal and have the text flow to the new size. I’ve found it difficult to find a minimal terminal like Terminator which also supports this feature.

                                                              Something I can’t ever shake the feeling of, is that iTerm2 for macOS is the best terminal emulator, and consistently innovates and pushes the boundaries on what a terminal emulator can do … but without feeling bloated.

                                                              1. 4

                                                                FYI, terminator isn’t really “minimal.” In interface, sure, but it uses the heavy/featureful vte, notably used in gnome-terminal.

                                                              1. [Comment removed by author]

                                                                1. 7

                                                                  First NixOS on prgmr, then Lobste.rs on NixOS! :)

                                                                  1. 4

                                                                    I haven’t received any reports of users running NixOS, but typically folks would only reach out to me if they were having a problem. You can certainly boot up a live rescue and run an install over the serial console. Depending on the distribution this either ‘just works’ or requires it be told the console is on the serial port.

                                                                    1. 4

                                                                      NixOS ISOs from their website do not enable the serial console by default, but building a custom ISO which does is easy enough. I did so a few days ago on Debian using nix to create a NixOS installer for my APU2:

                                                                      git clone --branch 18.03 --depth=1 https://github.com/NixOS/nixpkgs.git nixpkgs
                                                                      cat > serial-iso.nix <<EOF
                                                                      {config, pkgs, ...}:
                                                                      {
                                                                        imports = [
                                                                          <nixpkgs/nixos/modules/installer/cd-dvd/installation-cd-minimal.nix>
                                                                          <nixpkgs/nixos/modules/installer/cd-dvd/channel.nix>
                                                                        ];
                                                                        boot.kernelParams = [ "console=ttyS0,115200n8" ];
                                                                      }
                                                                      EOF
                                                                      nix-build -A config.system.build.isoImage -I nixos-config=serial-iso.nix nixpkgs/nixos/default.nix
                                                                      
                                                                    2. 1

                                                                      I actually tried a few months ago, but gave up because I thought I had figured out it was impossible. Although, seeing the link that @alynpost just posted, I might give it another go when I have some free time.

                                                                      1. 6

                                                                        Some years ago, a friend taught me a simple trick which I used twice to install OpenBSD at providers where neither OpenBSD nor custom ISOs were directly supported: we would build or download a statically linked build of Qemu, boot the VPS into its rescue image, and start Qemu with the actual hard disk of the VPS as its disk and an ISO to boot from. That’s not too hard and works for pretty much everything where you’ve got a rescue system with internet access. I guess it should work for NixOS too, and maybe nix could even be used for the qemu build ;)

                                                                        1. 3

                                                                          If you want to give it a go and get stuck write support@prgmr.com and we’ll help you debug.

                                                                          1. 2

                                                                            Thanks! I really should have been less lazy and just asked for help last time.

                                                                      1. 26

                                                                        If you want to impress me, set up a system at your company that will reimage a box within 48 hours of someone logging in as root and/or doing something privileged with sudo (or its local equivalent). If you can do that and make it stick, it will keep randos from leaving experiments on boxes which persist for months (or years…) and make things unnecessarily interesting for others down the road.

                                                                        Man, yes. At a previous company I set the whole company up using immutable deployments. Part of this was that you could still log in and change stuff, but it marked the box as “tainted” and would terminate and replace it after 24hrs. This let you log in, fix a breakage and go back to bed … but made sure that “port it back to the config management tool” was the #1 task for the morning.

                                                                        A second policy was no machine existed for more than 90 days.

                                                                        These two policies instilled in us a hard-line attitude of “if it isn’t managed, it isn’t real” and were resoundingly successful in pushing us to solid deployment mechanisms which worked and survived instances being replaced regularly.

                                                                        I can’t recommend this approach enough. Thank you Rachel, for writing about this.

                                                                        1. 8

                                                                          A second policy was no machine existed for more than 90 days.

                                                                          I’m curious how you managed the stateful machines (assuming you had some). I’m a DBA, and, well, I often find myself pointing out to our sysads that stateful stuff is just harder to manage (and maintain uptime) than stateless stuff. Did you just exercise the failover mechanism automatically? How did that work downstream?

                                                                          1. 7

                                                                            Great catch! Our MySQL database cluster was excluded from the rule because of the inherent challenges of making that work, however our caching and ElasticSearch clusters were not. Caching because it is a cache, ElasticSearch because its replication and failure handling is batteries-included. Note this was with a not enormous amount of data, if our data grew to $lots we would likely stop giving ES the same treatment.

                                                                            We worked hard to architect our systems in such a way that data was not on random machines, but in very specific places.

                                                                            1. 5

                                                                              Ah, good, okay. That makes more sense.

                                                                              Currently we’re in a private cloud, so nothing’s batteries-included. Plus we’re using a virtual storage system in a way that would make traditional replica/failover structures too expensive. The result is our production DB VMs go for a very long time between reboots, let alone rebuilds.

                                                                              I agree, though, that isolation is a great way to limit that impact. Combine that with some decent data-purpose division (e.g. move the sessions out of the DB into a redis store that can be rebuilt, move the reporting data to a separate DB so we can flip between replicas during reboots, etc), and you can really cut down on the SPOFs.

                                                                          2. 1

                                                                            I’ve been in 2 different orgs where they reimaged the machine as soon as each user logged out!

                                                                            1. 1

                                                                              Aggressive! I wonder if there were escape hatches for emergencies?

                                                                              1. 1

                                                                                What sort of emergencies are you envisioning?

                                                                          1. 7

                                                                            I use NixOS as my main OS at home and I think it’s the best thing I’ve done for having a stable system. I was mucking around with some boot deps and messed something up and all I had to do to get back to a working system was choose one option up at the grub menu and I booted into my system as it was before I had made the change.

                                                                            However there’s a few things that I wish were better:

                                                                            You pretty much need to put your nixos config into version control. While you can revert to a previous version of your system, it doesn’t actually save the previous version of your config, you need to manually revert before making any changes.

                                                                            While versions of things are tracked explicitly and you can have multiple versions installed, nixpkgs generally doesn’t have multiple versions available to install/depend on (with the obvious exceptions of big things like py2/3). This means if you need a newer version of something and want to contribute back you have to update everything else that depends on your package’s (i.e. derivation’s) dependencies. That’s a pain. It also means that you can’t install old versions of things alongside new versions of things.

                                                                            There’s also a lot to learn if you need something that’s not packaged already because there’s no way to run binaries not built explicitly for nixos. There’s not even any way to run flatpacks, snaps, or any of the others, but looking at nixpkgs there are people working on trying to make those work.

                                                                            All that said, it’s still a better experience than any other distro I’ve used in the past. And I’ve never even tried to contribute to packages on any previous distro, so I’m not sure if it’s easier this way, but it’s a hell of a lot less intimidating for sure.

                                                                            Also, I’m by no means an expert, take what I’ve said with a grain of salt, I’m sure there’s bound to be at least one thing I’ve said above that’s wrong just due to my inexperience.

                                                                            And again, that’s mostly about NixOS, and not just Nix. I’m actually in the process of moving all the things I’ve installed via homebrew on my work laptop over to Nix after homebrew broke my system (twice) yet again when they mucked with the python 2/3 naming. I’m tired of dealing with it and have yet to have a serious issue with Nix on OSX. So, I can wholeheartedly suggest to everyone here to start playing around with Nix on an existing Linux or OSX system.

                                                                            1. 5

                                                                              It also means that you can’t installed old versions of things along side new versions of things.

                                                                              Nothing prevents you from using different revisions of nixpkgs in different places, which would allow you to achieve this.
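
                                                                              A sketch of what that can look like (the commit hash below is a placeholder for whichever nixpkgs revision you want to pin):

                                                                              let
                                                                                # placeholder commit; substitute the revision you actually want
                                                                                oldPkgs = import (builtins.fetchTarball
                                                                                  "https://github.com/NixOS/nixpkgs/archive/<commit-sha>.tar.gz") {};
                                                                              in oldPkgs.hello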

                                                                              There’s also a lot to learn if you need something that’s not packaged already because there’s no way to run binaries not built explicitly for nixos.

                                                                              This is not true, Nix has a buildFHSUserEnv function that creates a linux chroot where you can pretend you’re running a regular linux distro. @puffnfresh has a good post on using this here.

                                                                              1. 5

                                                                                Nothing prevents you from using different revisions of nixpkgs in different places, which would allow you to achieve this.

                                                                                Huh, I can’t believe I never thought of that. I’ve even installed things from a local “fork” of nixpkgs and it never occurred to me that’s exactly what I was doing.

                                                                                This is not true, Nix has a buildFHSUserEnv function that creates a linux chroot where you can pretend you’re running a regular linux distro. @puffnfresh has a good post on using this here.

                                                                                That’s true. I guess I was inexact in what I wrote. You might not need to “package” something (as in contributing it to nixpkgs), but you still need to know enough about how things are “packaged” (as in writing any kind of derivation) so you can write a .nix file that wraps it in something that allows it to work. I really wish there was a pretend-to-not-be-nix ./rando-bin command that would handle 99% of binaries for when I just need to get something done. (Although I realize that’s asking a heck of a lot.)

                                                                                Edit: Huh. I think that’s what you just linked. I should have read that all the way through before replying. Man, I’ve been looking for something like that for ages. I should complain about things on the internet more often.

                                                                                1. 4

                                                                                  At work we use a few different versions of nixpkgs. We want an old version of Docker, for example. So we import the exact commit of nixpkgs we want.

                                                                                  pretend-to-not-be-nix ./rando-bin
                                                                                  

                                                                                  Is exactly what you get when using buildFHSUserEnv.

                                                                                  1. 2

                                                                                    I really wish there was a pretend-to-not-be-nix ./rando-bin command that would handle 99% of binaries for when I just need to get something done.

                                                                                    I think a lot of people find that steam-run provides exactly this in most cases!

                                                                              1. 3

                                                                                This is great, but I’m not sure what problem they are addressing. My main problem with VPN services isn’t that I’d have to trust their software, because I’m not the only one running it. I have to trust their networks, their operators, their everything.

                                                                                This might be an unpopular opinion, but I think I’m better off with HTTPS Everywhere (and Tor, when I want to be really anonymous).

                                                                                1. 1

                                                                                  and Tor, when I want to be really anonymous

                                                                                  Of course that isn’t even a very good option unless you have extraordinary opsec hygiene.

                                                                                  1. 1

                                                                                    I’d say it’s relatively easy, depending on who you want to be anonymous to.

                                                                                    But for a more general audience, I recommend checking the Tor documentation about the protection they provide. They also have great illustrations of how and where to expect privacy from whom. Also, use the Tor Browser Bundle. Other browsers will betray you :)

                                                                                  2. 1

                                                                                    I think a lot of their customers just don’t want to receive rude letters in the mail from their ISPs. I can attest that this service prevents such letters. …Assuming you remember to turn the VPN on, or use a VM/dedicated machine that always/only has it on.

                                                                                  1. 13

                                                                                    Mmmmh, an anonymous domain registration, an unknown “CTS” security research firm publishing only one whitepaper for all vulnerabilities. Whitepaper published on a secondary website “safefirmware.com”, that is otherwise broken.

                                                                                    No exploit has been published, there is no peer review, no responsible disclosure to verify the findings.

                                                                                    This smells like FUD. The SP is probably broken and vulnerable, yes. But this crap seems only aimed at selling security services.

                                                                                    1. 4

                                                                                      How does “responsible disclosure” verify findings?

                                                                                      1. 1

                                                                                        My phrasing was a bit misleading, but the whole “exploit being published, peer review, responsible disclosure” was what I was getting at to verify the findings. These publications have to be transparent, reproducible and verified by third parties to be taken seriously.

                                                                                      2. 6

                                                                                        No exploit has been published, there is no peer review, no responsible disclosure to verify the findings.

                                                                                        This is bullshit. Here’s peer review.

                                                                                        I’m astounded at just how strong the backlash against this is, and the backlash reeks of damage control propaganda.

                                                                                        AMD PSP is a hardware backdoor. Intel ME is a hardware backdoor. These things shouldn’t exist in the first place, and I wouldn’t put it past AMD and Intel to spend $$ sending armies of trolls trying to cover up the severity of what they’ve done.

                                                                                        1. 0

                                                                                          Of course AMD PSP shouldn’t exist in the first place.

                                                                                          But the backlash against this is simply due to “it” being a ridiculous hit-job. I don’t care about damage to AMD.

                                                                                          This is bullshit. Here’s peer review.

                                                                                          Nice, they did not link it on their website. My first guess will always be that there is none unless shown otherwise.

                                                                                        2. 2

                                                                                          Seems to be the consensus about this site on Reddit, HN, etc. Someone’s either trying to make a name for themselves or Intel paid someone who paid someone who paid someone who is good at marketing.

                                                                                          1. 1

                                                                                            and a big connection to

                                                                                            the Israeli Intelligence Corps Unit 8200