1. 1

    Thanks for sharing your creation with the world! I get a little sad whenever I see someone write a language interpreter/compiler in $NOT_HASKELL. Don’t get me wrong, I’m not implying your decision wasn’t the best one under the circumstances; I’m just curious whether you’ve considered doing that.

    1. 1

      The grammar of the language is defined in EBNF (using the TatSu parser generator for Python) https://github.com/endgameinc/eql/blob/900a25e7e8721292be61e11352efb5329d399b53/eql/etc/eql.ebnf and beyond what has been released, we’ve also implemented it in a couple of other languages internally. Neither of them is Haskell, as we don’t use that at all internally, but I think we’ve talked about doing an implementation in OCaml.

      The language is relatively simple, and even the extensions with functions etc. don’t lock it to any particular PL stack, nor would they prevent you from compiling EQL statements into little programs “straight”. The heavy lifting around EQL has to do with making it compatible with data formats and schemas from other security tools, i.e. how security events from Windows/Linux/macOS compare, etc.

      Ideally this query language will have other implementations. At its heart it’s just a way of ingesting events and selecting those that match patterns either within single events or in chains of interrelated events. None of that is wedded to python or anything else.
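
      To make that concrete, here’s a hedged Haskell sketch (all names invented here, not EQL’s actual API) of matching within single events versus across a chain of interrelated events:

      data Event = Event { kind :: String, pid :: Int } deriving Show

      -- “Single event” matching is just a predicate over events.
      selectWhere :: (Event -> Bool) -> [Event] -> [Event]
      selectWhere = filter

      -- “Chain” matching correlates interrelated events, here by pid.
      chains :: [Event] -> [(Event, Event)]
      chains evs =
        [ (a, b)
        | a <- evs, kind a == "process"
        , b <- evs, kind b == "network"
        , pid a == pid b
        ]

      main :: IO ()
      main = print (chains [Event "process" 1, Event "network" 1, Event "network" 2])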

    1. 2

      Give Rich a taste of his own medicine

      1. 2

        This is pretty neat, but in what contexts is it valuable to apply dropWhileEnd to an infinite list?

        1. 1

          Well, it wouldn’t be useful to apply it to a list that you know to be infinite. The utility here is that it lets you stay tolerant of infinite lists while getting what you want from finite ones.

          “Infinite list” is also a code word for “a very large list”, in that an algorithm that works for an infinite list will also avoid traversing a very large list unnecessarily, so it’s a good sign to be able to work on infinite lists.
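
          A minimal sketch of that tolerance, assuming GHC’s Data.List.dropWhileEnd (which stays productive as long as elements failing the predicate keep appearing):

          import Data.List (dropWhileEnd)

          main :: IO ()
          main = do
            -- Finite list: trailing matches are dropped as expected.
            print (dropWhileEnd (== ' ') "trailing   ")          -- "trailing"
            -- Infinite list: effectively a no-op, but still productive,
            -- so finite consumers downstream keep working.
            print (take 6 (dropWhileEnd (== 0) (cycle [1, 0])))  -- [1,0,1,0,1,0]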

          1. 1

            Sure, I guess what I should have asked is why you need to do that? It seems suspiciously like papering over buggy code.

            Infinite lists rub me the wrong way. They’re convenient, and there’s no way to avoid them with how Haskell handles laziness. But they make a lot of functions unsafe.

            1. 1

              IMHO, infinite lists aren’t conceptually different from lists that are so large that anything that relies on the list’s finiteness makes the program wrong. People are worried that length is partial in the presence of infinite lists, but isn’t length just as bad in a strict language if the singly-linked list you’re feeding to it is very large? That’s why I’m not worried about the lack of finite/infinite distinction, the problem is there in any case.

              EDIT: BTW, why do you think the code would be buggy? It’s very idiomatic Haskell to write

              takeAsManyAsYouNeed $ calculateAPossiblyInfiniteSequenceOfThings
              

              For instance, you could be implementing a scheduler that might have periodic tasks, so the list everythingThatNeedsToBeDone can be an infinite list, and you can filterRelevantTasks and then take from that list a number of tasks that you’re willing to start “now”, which is very clean.
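
              A hedged sketch of that scheduler shape (every name here is invented for illustration):

              data Task = Task { taskName :: String, period :: Int } deriving Show

              -- Periodic tasks make the stream of work infinite.
              everythingThatNeedsToBeDone :: [Task]
              everythingThatNeedsToBeDone = cycle [Task "heartbeat" 1, Task "backup" 5]

              filterRelevantTasks :: [Task] -> [Task]
              filterRelevantTasks = filter ((< 3) . period)

              -- Selecting work for “now” stays total despite the infinite source.
              tasksForNow :: Int -> [Task]
              tasksForNow n = take n (filterRelevantTasks everythingThatNeedsToBeDone)

              main :: IO ()
              main = mapM_ print (tasksForNow 3)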

              1. 2

                In a strict language, you basically cannot create a list so big you can’t read it, because both the cost of creating the list and reading its length are proportional to that length. You can certainly cause long application pauses with an overly long list, but not non-termination. So I do think there’s a pretty big conceptual difference.

                I think the code is buggy because you’re trying to drop the end of an infinite list, and it’s a no-op. Presumably you wrote that code for some purpose that it likely can’t achieve. I’m willing to acknowledge that there might be a case where you’ve carefully arranged things so that you drop the end of a finite list or do a no-op on an infinite list, and both are correct, but that seems contrived.

                I’m not against the idea of infinite sequences, or non-termination (every web server needs non-termination of some kind). I’m just suspicious of bundling that into the same type that we use for finite lists.

                P.S. I think your point that not-traversing a long list is valuable is entirely correct.

        1. 4

          Python REPL is also pretty good but it is a code snippet REPL and not a real system REPL like Lisp REPLs.

          Is there any concrete difference behind this statement? Like, such and such is not possible in IPython but is supported in the Lisp REPL.

          1. 3

            So in Lisp you typically have the process running with a REPL attached to it, and you can replace parts of it while it is running, e.g. reimplement a function without having to restart the process to reload the code. That’s not something that is typically done with Python.

            1. 1

              So it’s about tradition, I guess? Because you could start your process in the REPL and do the same thing in Python as well, right? Though I understand that it’s more flexible to be able to attach to a process, since most projects involve a huge environment for the process to run in, and you typically won’t be able to just run a function in the REPL and have it behave like your real system.

              1. 3

                I assume it is also about the runtime being able to do hot code swapping. I know this is pretty much what Erlang’s BEAM VM is designed for, but reloading modules in Python does not work particularly reliably. And you can’t redefine functions that easily either.

                1. 2

                  It’s not just about tradition. CLOS (the Common Lisp Object System) has a whole set of functions to enable one to redefine classes, update instances of a class when their class changes, &c. Python is much more static that way, as anyone who’s ever reloaded a file and found that his changes didn’t affect existing objects can attest.

                  Regarding features not supported by the Python REPL: the Python reader isn’t as extensible as the Lisp reader is (one can execute arbitrary Lisp code for any particular character using the read macro facility); the Python evaluator takes strings rather than expressions (this makes code generation far trickier); and the Python printer can’t print circular structure (unless something has changed). More generally, Python’s error-handling capabilities are much more primitive than Lisp’s, which means that it’s pretty normal to get into a bad, half-loaded state with Python but (almost) unheard of with Lisp.

            1. 13

              I read this post very differently than a lot of people here. Many people here see a man who gave up improving himself or is simply a bad programmer who has a lot to learn. I looked at it from a different angle. He wrote…

              I’m less and less tolerant of hokey marketing filled with superlatives. I value stability and clarity.

              I firmly believe our industry is mostly a pop culture and this is someone who is basically struggling to keep up with fashion. It’s not about being unable to keep up with the latest tech, it’s about not being able to keep up with fashion.

              1. 7

                Well, and just not wanting to keep up with the latest fashion. After a while you just get tired of change for change’s sake.

                1. 4

                  I feel like I’m there…

                2. 3

                  It’s not really (just) fashion though. The reason the churn is increasing is because everything still sucks. Our tools are archaic, our ecosystems full of junk, and the world at large is screaming for more and more software (and programmers) all the time, with very little focus going into fixing the software and practices we use to build software.

                  1. 3

                    Very true. However, whether we build new software or fix existing software falls squarely on technical leadership. There is no ‘other’ driving these choices except us. Maybe we are our own fools.

                  2. 2

                    I think a part of being a good programmer is having the intuition to distinguish breakthroughs from fashion. I personally use mathematical foundations to guide my investments, and it’s been paying off well for me for the last 12 years.

                    1. 2

                      I think having good math skills is essential, not because they are useful (though they are), but because math is a game and is proto-programming. I do think, though, that programming is a new kind of math that’s ultimately harder in many ways.

                      1. 3

                        Programming is a branch of applied mathematics, but not the kind of mathematics most people ever see in high school or college. The closest most people get to abstract mathematics is set theory, which they last see in primary school.

                        However, what I meant was, when I see the next big thing, I judge by looking at its mathematical foundations whether it’s really good, or just something someone with a lot of followers on Twitter happened to see on a good day.

                        1. 2

                          Haha, yes, I see what you mean. It’s actually amazing how many old math ideas are not yet used in software. I wonder whether someone looking to create a new product could just take some crazy math idea and try building a product around it.

                          1. 1

                            Sounds like a fun way to earn money :) But you also need wealthy people with a need for your solution.

                  1. 6

                    The worst thing about hacks that somehow work isn’t the possibility that they might suddenly stop working, but the fact that you’ll have to mentally evaluate whether the hack is the reason every single time there appears to be a problem.

                    1. 21

                      So I think I’m a bit late for the big Go and Rust and garbage collection and borrow checker discussion, but it took me a while to digest, and I came up with the following (personal) summary.

                      Determining when I’m done with a block of memory seems like something a computer could be good at. It’s fairly tedious and error prone to do by hand, but computers are good at monotonous stuff like that. Hence, garbage collection.

                      Or there’s the rust approach, where I write a little proof that I’m done with the memory, and then the computer verifies my proof, or rejects my program. Proof verification is also something computers are good at. Nice.

                      But writing the proof is still kind of a pain in the ass, no? Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done. Go figure out the refs and borrows to make it work, kthxbye.

                      1. 18

                        But writing the proof is still kind of a pain in the ass, no? Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done. Go figure out the refs and borrows to make it work, kthxbye

                        I’m in the middle of editing an essay on this! Long story short, proving an arbitrary code property is undecidable, and almost all the decidable cases are in EXPTIME or worse.

                        1. 10

                          I’m kinda familiar with undecidable problems, though with fading rigor these days, but the thing is, undecidable problems are undecidable for humans too. The impossible task becomes no less impossible by making me do it!

                          I realize it’s a pretty big ask, but the current state of the art seems to be redefine the problem, rewrite the program, find a way to make it “easy”. It feels like asking a lot from me.

                          1. 10

                            The problem is undecidable (or very expensive to decide) in the most general case; what Rust does is solve it in a more limited case. You just have to prove that your usage fits into this more limited case, hence the pain in the ass. Humans can solve more general cases of the problem than Rust can, because they have more information about the problem. Things like “I only ever call function B with inputs produced from function A, function A can only produce valid inputs, so function B doesn’t have to do any input validation”. Making these proofs without computer assistance is no less of a pain in the ass. (Good languages make it easy to enforce these proofs automatically at compile or run time, good optimizers remove redundant runtime checks.)

                            Even garbage collectors do this; their safety guarantees are a subset of what a perfect solution would provide.
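
                            A hedged Haskell sketch of that “B needs no validation” idea, where the only way to obtain B’s input type is through the validating function (names invented):

                            newtype Valid = Valid String  -- constructor kept private in real code

                            functionA :: String -> Maybe Valid
                            functionA s
                              | not (null s) = Just (Valid s)
                              | otherwise    = Nothing

                            functionB :: Valid -> Int  -- no input validation needed here
                            functionB (Valid s) = length s

                            main :: IO ()
                            main = print (functionB <$> functionA "hello")  -- Just 5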

                            1. 3

                              “Humans have more information about the problem”

                              And this is why a conservative borrow checker is ultimately the best. It can be super optimal and not step on your toes. It’s up to the human to adjust the lifetime of memory, because only the human knows what it wants.

                              I AM NOT A ROBOT BEEP BOOP

                            2. 3

                              Humans have a huge advantage over the compiler here though. If they can’t figure out whether a program works or not, they can change it (with the understanding gained by thinking about it) until they are sure it does. The compiler can’t (or shouldn’t) go making large architectural changes to your code. If the compiler tried its hardest to be as smart as possible about memory, the result would be that when it says “I give up, the code needs to change”, the human who can change the code is going to have a very hard time understanding why and what to change (since they haven’t been thinking about the problem).

                              Instead, what Rust does is apply as intelligent a set of rules as they could that produce consistent, understandable results for the human. So the compiler can say “I give up, here’s why”. And the human can say “I know how the compiler works; it will accept this” instead of flailing about trying to convince the compiler it works.

                              1. 1

                                I realize it’s a pretty big ask

                                I’ve been hearing this phrase “big ask” lately, generally from business people; it seems very odd to me. Is it new, or have I just missed it up to now?

                                1. 2

                                  I’ve been hearing it from “business people” for a couple years at least, I assume it’s just diffusing out slowly to the rest of society.

                                  The new one I’m hearing along these lines is “learnings”. I think people just think it makes them sound smart if they use different words.

                                  1. 1

                                    A “learning”, as a noun, is attested at least as far back as the early 1900s, FYI.

                                    1. 0

                                      This sort of comment annoys me greatly. Someone used a word incorrectly 100 years ago. That doesn’t mean it’s ‘been a word for 100 years’ or whatever you’re implying. ‘Learning’ is not a noun. You can argue about the merits of prescriptivism all you like, you can have whatever philosophical discussion you like as to whether it’s valid to say that something is ‘incorrect English’, but ‘someone used it in that way X hundred years ago’ does not justify anything.

                                      1. 2

                                        This sort of comment annoys me greatly. Someone used a word incorrectly 100 years ago. That doesn’t mean it’s ‘been a word for 100 years’ or whatever you’re implying. ‘Learning’ is not a noun.

                                        It wasn’t “one person using it incorrectly”; that’s not even remotely how attestation works in linguistics. And of course, of course it is very much a noun. What precisely, man, do you think a gerund is? We have learning curves, learning processes, learning centres. We quote Pope to one another when we say that “a little learning is a dangerous thing”.

                                        To take the position that gerunds aren’t nouns and cannot be pluralized requires objecting to such fluent Englishisms as “the paintings on the wall”, “partings are such sweet sorrow”, or “I’ve had three helpings of soup”.

                                        1. 0

                                          ‘Painting’ is the process of painting. You can’t pluralise it. It’s also a (true) noun, the product of doing some painting. There it obviously can be pluralised. But ‘the paintings we did of the house kept improving the sheen of the walls’ is not valid English. They’re different words.

                                          1. 2

                                            LMAO man, how do you think Painting became a “true” noun? It’s just a gerund being used as a noun that you’re accustomed to. One painted portraits, landscapes, still lifes, studies, etc. To group all these things together as “paintings” was an instance of the exact same linguistic phenomenon that gives us the idea that one learns learnings.

                                            You’re arguing against literally the entire field of linguistics here on the basis of gut feelings and ad hoc nonsense explanations.

                                            1. 0

                                              You’re arguing against literally the entire field of linguistics here on the basis of gut feelings and ad hoc nonsense explanations.

                                              No, I’m not. This has literally nothing to do with linguistics. That linguistics is a descriptivist scientific field has nothing to do with whether ‘learnings’ is a real English word. And it isn’t. For the same reason that ‘should of’ is wrong: people don’t recognise it as a real word. Words are what we say words are. People using language wrong are using it wrong in the eyes of others, which makes it wrong.

                                              1. 1

                                                That linguistics is a descriptivist scientific field has nothing to do with whether ‘learnings’ is a real English word. And it isn’t. For the same reason that ‘should of’ is wrong: people don’t recognise it as a real word. Words are what we say words are.

                                                Well, I hate to break it to you, but plenty of people say learnings is a word, like all of the people you were complaining about who use it as a word.

                                                1. 0

                                                  There are lots of people who write ‘should of’ when they mean ‘should’ve’. That doesn’t make them right.

                                                  1. 1

                                                    Yes and OK is an acronym for Oll Korrect, anyone using it as a phrase is not OK.

                                                    1. 0

                                                      OK has unknown etymology. And acronyms are in no way comparable to simply incorrect grammar.

                                                      1. 1

                                                        Actually, it is known. Most etymologists agree that it came from Boston in 1839, originating in a satirical piece on grammar. This was a response to people who insisted that English must follow some strict, unwavering set of laws as though it were a kind of formal language. OK is an acronym, it stands for Oll Korrect, and it was literally invented to make pedants upset. Certain people were debating the use of acronyms in common speech, and to lay it on extra thick the author purposefully misspelled All Correct. The word was quickly adopted because pedantry is pretty unpopular.

                                                        1. 1

                                                          What I said is that there is what is accepted as valid and what is not. Nobody educated thinks that ‘should of’ is valid. It’s a misspelling of ‘should’ve’. Nobody thinks ‘shuold’ is a valid spelling of ‘should’ either. Is this really a debate you want to have?

                                                          1. 1

                                                            I was (mostly) trying to be playful while also trying to encourage you to be a little less litigious about how people shuold and shuold not use words.

                                                            Genuinely sorry for making you actually upset though, I was just trying to poke fun a little for getting a bit too serious at someone over smol beans, and I was not trying to make you viscerally angry.

                                                            I also resent the attitude that someone’s grammatical or vocabulary knowledge of English represents an “education”.

                                  2. 1

                                    It seems like in the last 3 years all the execs at my company started phrasing everything as “The ask is…” I think they are trying to highlight that you have input (you can answer an ask with no) vs an order.

                                    In practice, of course, many “asks” are orders.

                                    1. 4

                                      Sure, but we already have a word for that, it’s “request”.

                                      1. 4

                                        Sure, but the Great Nouning of Verbs in English has been an ongoing process for ages and continues apace. “An ask” is just a more recent product of the process that’s given us a poker player’s “tells”, a corporation’s “yearly spend”, and the “disconnect” between two parties’ understandings.

                                        All of those nouned verbs have or had perfectly good non-nominalized verb nouns, at one point or another in history.

                                        1. 1

                                          One that really upsets a friend of mine is using ‘invite’ as a noun.

                                    2. 1

                                      Newly popular? MW quotes this usage and calls it a Britishism.

                                      https://www.merriam-webster.com/dictionary/ask

                                      They don’t date the sample, but I found it’s from a 2008 movie review.

                                      https://www.spectator.co.uk/2008/10/cold-comfort/

                                      So at least that old.

                                  3. 3

                                    You no doubt know this, but the undecidable stuff mostly becomes decidable if you’re willing to accept a finite limit on addressable memory, which anyone compiling for, say, x86 or x86_64 is already willing to do. So imo it’s the intractability rather than undecidability that’s the real problem.

                                    1. 1

                                      It becomes decidable by giving us an upper bound on the number of steps the program can take, so should require us to calculate the LBA equivalent of a very large BB. I’d call that “effectively” undecidable, which seems like it would be “worse” than intractable.

                                      1. 2

                                        I agree it’s, let’s say, “very” intractable to make the most general use of a memory bound to verify program properties. But the reason it doesn’t seem like a purely pedantic distinction to me is that once you make a restriction like “64-bit pointers”, you do open up a bunch of techniques for finite solving, some of which are actually usable in practice to prove properties that would be undecidable without the finite-pointer restriction. If you just applied Rice’s theorem and called verifying those properties undecidable, it would skip over the whole class of things that can be decided by a modern SMT solver in the 32-bit/64-bit case. Granted, most still can’t be, but that’s why the boundary that interests me more nowadays is the “SMT can solve this” vs. “SMT can’t solve this” one rather than the CS-theory sense of decidable/undecidable.
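
                                        As a toy illustration of that boundary (hedged: this leans on the sbv package, with API details from memory), here’s a fixed-width property an SMT solver can discharge directly:

                                        {-# LANGUAGE ScopedTypeVariables #-}
                                        import Data.SBV

                                        -- Over 64-bit words this is a finite search space, and the
                                        -- solver proves the property without enumerating it.
                                        main :: IO ()
                                        main = print =<< prove (\(x :: SWord64) -> x + 1 - 1 .== x)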

                                  4. 6

                                    Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done.

                                    It’s really hard. The main tool for that is separation logic. Manually doing it is harder than borrow-checking stuff. There are people developing solvers to automate such analyses. Example. It’s possible what you want will come out of that. I think there will still be restrictions on coding style to ease analyses.

                                    1. 3

                                      In my experience, automated proof generators are very leaky abstractions. You have to know their search methods in detail, and present your hypotheses in a favorable way for those methods. It can look very clean, but it can mean that seemingly easy changes turn out to be frustrated by the methods’ limitations.

                                      1. 4

                                        I’m totally with you on this. Rust very much feels like an intermediate step, and I don’t know why they didn’t take it to its not-necessarily-obvious conclusion.

                                        1. 5

                                          In my personal opinion, it might be just that we’re happy that we can actually get to this intermediate point (of Rust) reliably enough, but have no idea yet how to get to the further point (conclusion). So they took it where they could, and left the subsequent part as an exercise for the reader… I mean, to be explored by future generations of programmers, hopefully.

                                          1. 4

                                            We have the technology, sort of. Total program analysis is really expensive though, and the workflow is still “edit some code” -> “compile on a laptop” -> repeat. Maybe if we built a gc’ed language that had a mode where you push your program to a long running job on a compute cluster to figure out all the memory proofs.

                                            This would be especially cool if incrementals could be cached.

                                            1. 4

                                              I’ve recommended that before. There’s millions being invested into SMT/SAT solvers for common bugs that might make that happen, too. Gotta wait for the tooling to catch up. My interim recommendation was a low-false-positive static-analysis tool like RV-Match to be used on everything in the fast path. Anything that passes is done with no GC. Anything that hangs or fails is GC’d. Same with automated proofs to eliminate safety checks. If it passes, remove the check if that’s what the pass allows. If it fails, maybe it’s safe or maybe the tool is too dumb. Keep the check. Might not even need a cluster given the number of cores in workstations/servers and efficiency improvements in tools.

                                            2. 4

                                              I think it’s because there’s essentially no chance that a random piece of code will be provable in such a way. Rust encourages, actually to the point of forcing, the programmer to reason about lifetimes and ownership along with other aspects of the type as they’re constructing the program.

                                              I think there may be a long term evolution as tools get better: the languages checks the proofs (which, in my dream, can be both types and more advanced proofs, say that unsafe blocks actually respect safety), and IDE’s provide lots of help in producing them.

                                              1. 2

                                                there’s essentially no chance that a random piece of code will be provable in such a way

                                                  There must be some chance; Rust is already proving memory safety.

                                                  Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem to be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used; why can’t it determine for us how to borrow it and who owns it?

                                                1. 17

                                                    Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem to be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used; why can’t it determine for us how to borrow it and who owns it?

                                                    This is a misconception. The Rust compiler does not see anything beyond the function boundary. That makes lifetime checking efficient. Basically, when compiling a function, the compiler makes a reasonable assumption about how input and output references are connected (the assumption is “they are connected”, also known as “lifetime elision”). This is an assumption communicated to the outside world. If this assumption is wrong, you need to annotate lifetimes.

                                                    When compiling, the compiler will check if the assumption holds for the function body. So, for every function call, it will check if the signature holds (lifetimes are part of the function signature).

                                                    Note that functions with different lifetime annotations taking the same data might differ in their behaviour. It also isn’t always obvious to the compiler whether you want references to be bound together or not, and that situation might be ambiguous.

                                                  The benefit of this model is that functions only need to be rechecked/compiled when they actually change, not some other code somewhere else in the program. It’s very predictable and errors are local to the function.

                                                  1. 2

                                                    I’ve been waiting for you @skade.

                                                    1. 2

                                                      Note that functions with different lifetime annotations taking the same data might differ in their behaviour.

                                                      I wrote this late at night and have some errata here: they might differ in their behaviour wrt. lifetime checking. Lifetimes have no impact on the runtime, an annotation might only prove something safe that the compiler previously didn’t see as safe.

                                                    2. 4

                                                      Maybe I’m misunderstanding. I’m interpreting “take it to its conclusion” as accepting programs that are not annotated with explicit lifetime information but for which such an annotation can be added. (In the context of Rust, I would consider “annotation” to include choosing between &, &mut, and by-move, as well as adding .clone() when needed, especially for refcount types, and of course adding explicit lifetimes in cases that go beyond the present lifetime elision rules, which are actually pretty good). My point is that such a “smarter compiler” would fail a lot of the time, and that failures would be mysterious. There’s a lot of experience around this for analyses where the consequence of failure is performance loss due to not being able to do an optimization, or false positives in static analysis tools.

                                                      The main point I’m making here is that, by requiring the programmer to actually provide the types, there’s more work, but the failures are a lot less mysterious. Overall I think that’s a good tradeoff, especially with the present state of analysis tools.

                                                      1. 1

                                                        I’m interpreting “take it to its conclusion” as accepting programs that are not annotated with explicit lifetime information but for which such an annotation can be added.

                                                        I’ll agree with that definition.

                                                        My point is that such a “smarter compiler” would fail a lot of the time, and that failures would be mysterious.

                                                        This is where I feel we disagree. I feel like you’re assuming that if we make lifetimes optional we would for some reason also lose the type system. That was not my assumption at all. I assumed the programmer would still pick their own types. With that in mind, if this theoretical compiler could prove memory safety using the developer-provided types and the inferred ownership, why would it still fail a lot?

                                                        where the consequence of failure is performance loss due to not being able to do an optimization

                                                        That’s totally understandable. I assume, like any compiler, it would eventually get better at this. I also assume lifetimes would become an optional piece of the program as well. Assuming this compiler existed, it seems reasonable to me that it could accept and prove lifetimes provided by the developer, along with inferring and proving them on its own.

                                                        1. 3

                                                          Assuming this compiler existed, it seems reasonable to me that it could accept and prove lifetimes provided by the developer, along with inferring and proving them on its own.

                                                          That’s what Rust does. And many improvements to Rust focus on increasing the number of lifetime patterns the compiler can recognize and handle automatically.

                                                          You don’t have to annotate everything for the compiler. You write code in patterns the compiler understands, and annotate things it doesn’t. So Rust has gotten easier and easier to write as the compiler gets smarter and smarter. It requires fewer and fewer annotations / unsafe blocks / etc as the compiler authors discover how to prove and compile more things safely.

                                                      2. 4

                                                        Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem to be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used; why can’t it determine for us how to borrow it and who owns it?

                                                        I wondered this at first, but inferring the lifetimes (among other issues) has some funky consequences w.r.t. encapsulation. Typically we expect a call to a function to continue to compile as long as the function signature remains unchanged, but if we infer the lifetimes instead of making them an explicit part of the signature, subtle changes to a function’s implementation can lead to new lifetime restrictions being inferred, which will compile fine for you but invisibly break all of your downstream callers.

                                                        When the lifetimes are an explicit part of the function signature, the compiler stops you from compiling until you either fix your implementation to conform to your public lifetime contract, or change your declared lifetimes (and, presumably, since you’ve been made conscious of the breakage in this scenario, notify your downstream and bump your semver).

                                                        It’s basically the same reason that you don’t want to infer the types of function arguments from how they’re used inside a function – making it easy for you to invisibly break your contract with the outside world is bad.

                                                        1. 3

                                                          I think this is the most important point here. Types are contracts, and contracts can specify far more than just int vs string. Complexity, linearity, parametricity, side-effects, etc. are all a part of the contract and the more of it we can get the compiler to enforce the better.

                                                  2. 1

                                                    Which is fine, until you have time or memory constraints that are not easily met by the tracing GC, which is true of all software of sufficient scale or complexity. At that point, you end up with half-assed and painful-to-debug/optimize manual memory management in the form of pools, etc.

                                                    1. 1

                                                      Or there’s the rust approach, where I write a little proof that I’m done with the memory, and then the computer verifies my proof, or rejects my program. Proof verification is also something computers are good at. Nice.

                                                      Oh I wish that were how Rust worked. But it isn’t. A variant of Rust where you could actually prove things about your programme would be wonderful. Unfortunately, in Rust, you instead just have ‘unsafe’, which means ‘trust me’.

                                                    1. 3

                                                      Points do not equate to value, of course, and they are not intended as a measure of individual developer speed, because scrum is about team capacity. Treating it in this way creates perverse incentives.

                                                      Probably you’re helping people “too much” and not getting enough help yourself. Pair, and get people to come help you get over any humps. That will help transfer points from them to you.

                                                      Secondly, try closing out tickets when the bare minimum of functionality is met (TDD will help here). If you can still earn points for bugs (which you should, given that it’s being treated as an individual metric), then you can pick up points for fixing unfinished functionality.

                                                      Thirdly, look at where you’re burning time in your dev cycle: keep a detailed log of the time you spend. This will help you know what to optimize (which will frequently mean getting other people to contribute, or omitting much of that activity).

                                                      Finally, try to increase the points allocated to stories you are likely to work on. Do this both by picking those stories out during estimation and arguing for higher point values; but also by picking stories that you think are oversized.

                                                      1. 1

                                                        Treating it in this way creates perverse incentives.

                                                        It is always treated this way. Numbers are like sirens to the mind. We can’t help adding them and averaging them and doing all kinds of stupid stuff with them. That’s why I think we should avoid numbers while estimating things. If you think it’s hard, say it’s hard, luckily, the average of hard and easy is undefined. Whatever value you hoped to gain from numbers’ arithmetic properties was a false promise to begin with.

                                                        1. 1

                                                          No it really isn’t always treated this way. The only time I’ve ever used it this way was to try to quantify and capture the gap between good and underperforming team members. It was appropriate in that case because they weren’t creating value in other ways.

                                                          If personal point velocity has been used the same way OP’s company does everywhere you have worked, then that’s a huge problem and I would love to talk more to your management about it.

                                                      1. 2

                                                        I wonder how they explain these moves to their shareholders. They won’t want to hear things like “we care about developers” as an explanation for a billion dollars gone.

                                                        1. 2

                                                          Perhaps unsurprisingly, Perl has both lexically (“my”) and dynamically (“local”) scoped variables.

                                                          Also, no need for an imaginary language, perl also implements the “dynamic control flow” scoping, which is - frankly - my least favourite part of the language:

                                                          $ cat tt.pl
                                                          #!/usr/bin/perl
                                                          use strict;
                                                          
                                                          foreach my $i (1..10) {
                                                            doit($i)
                                                          }
                                                          
                                                          sub doit {
                                                            my ($i) = @_;
                                                            next if $i % 3 == 0;
                                                            print "$i\n";
                                                          }
                                                          $ ./tt.pl
                                                          1
                                                          2
                                                          4
                                                          5
                                                          7
                                                          8
                                                          10
                                                          

                                                          The dynamically scoped variables are a useful tool to have in the toolbox, but the above code is frankly a footgun with no reasonable upside I can see.

                                                          1. 2

                                                            but the above code is frankly a footgun with no reasonable upside I can see.

                                                            I believe the argument is: why should you not be able to abstract parts of one function into another simply because they contain control flow?

                                                            1. 1

                                                              Makes sense. You can easily get this behavior in Haskell if you want it, since side-effectful computations like control flow are just values (with a type that usually involves the M-word), so you can manipulate them, pass them around, and return them from functions.
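
                                                              A minimal sketch of that, mirroring the Perl above by skipping multiples of 3 (names invented):

                                                              -- The helper *returns* the control flow to run for each element;
                                                              -- `pure ()` plays the role of Perl’s `next`.
                                                              step :: Int -> IO ()
                                                              step i
                                                                | i `mod` 3 == 0 = pure ()
                                                                | otherwise      = print i

                                                              main :: IO ()
                                                              main = mapM_ step [1 .. 10]  -- prints 1 2 4 5 7 8 10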

                                                          1. 9

                                                            My personal preference is to model as much of the requirements as feasible with types and write tests for the rest. In a language like Haskell, with a very powerful type system, this leaves little room for unit tests and somewhat more room for integration tests. You still need your traditional end to end system tests as well, of course, which fall outside the reach of your type system.

                                                            1. 4

                                                              You would like Idris, which takes this to an extreme. You can encode state machines, concurrent protocols, and much more in types, which looks like a whole new type of “metaprogramming”, and the choices it gives you are amazing.

                                                              1. 6

                                                                Idris is a great language, but it’s clearly not production ready. I can’t say I used any dependently typed language seriously, and I’m sure my opinion would change a lot if I did, but currently, I favor the “ghosts of departed proofs” kind of type level modeling, where you don’t prove your implementation internally, but you expose proof witnesses in the interface, so the users of your library can enjoy a very strongly typed interface.

                                                                This aligns very well with how I perceive types should be used; i.e. organize code such that the entangled pieces of code relevant to a property that is hard to prove live next to each other, so you can informally (i.e. without relying on types) prove to yourself that they satisfy the property. Then expose those pieces behind an interface that (relying on types) doesn’t allow consumers to violate the property.

                                                                1. 3

                                                                  you don’t prove your implementation internally, but you expose proof witnesses in the interface

                                                                  Can you point to some examples, please? I don’t really follow.

                                                                  1. 3

                                                                    Take a look at the justified-containers library. When you check whether a key is in a map, if the key actually is there, it gives you a type-level witness of that fact. Then when you lookup that key with that witness, you receive the value without a Maybe wrapping, because it’s proven already. However, the library uses fromJust internally (i.e doesn’t prove that fact to the compiler), because you can prove outside the type system that it’s impossible to receive a Nothing.
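
                                                                    A usage sketch of that interface (based on my reading of Data.Map.Justified; treat the exact signatures as approximate):

                                                                    import qualified Data.Map as M
                                                                    import Data.Map.Justified (withMap, member, lookup)
                                                                    import Prelude hiding (lookup)

                                                                    main :: IO ()
                                                                    main = withMap (M.fromList [(1 :: Int, "one")]) $ \m ->
                                                                      case member 1 m of
                                                                        Nothing  -> putStrLn "absent"
                                                                        -- `key` is the type-level witness, so lookup returns a bare value.
                                                                        Just key -> putStrLn (lookup key m)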

                                                                    1. 1

                                                                      Thanks

                                                                  2. 1

                                                                    but it’s clearly not production ready

                                                                    This sort of requires a qualifier. It’s probably not “introduce this to a company”-level production-ready, but it certainly feels like it’s “start an open-source project”-level production-ready, which seems to be at least relevant in online discussions. It’s such a great language because it brings enormously powerful concepts from other languages like Agda and Coq into an environment that basically looks like Haskell. I think any advanced Haskell programmer will be pleasantly surprised at how these higher-level features that feel clunky and require extensions become trivially easy in Idris (although that’s just an intuition; I’ve never really dabbled in Haskell beyond trivial stuff and started using Idris directly).

                                                                    I can’t say I used any dependently typed language seriously, and I’m sure my opinion would change a lot if I did, but currently, I favor the “ghosts of departed proofs” kind of type level modeling

                                                                    It’s not just about writing explicit proofs in code. I mean, it will pop up in more advanced code, but being able to use expressions in types and types in expressions is extremely flexible. Look at this example of a concurrent interface for operations on lists:

                                                                    ListType : ListAction -> Type
                                                                    ListType (Length xs) = Nat
                                                                    ListType (Append {elem} xs ys) = List elem
                                                                    

                                                                    How many languages allow you to express things on this level? :)

                                                                    Or look at this merge sort definition:

                                                                    mergeSort : Ord a => List a -> List a
                                                                    mergeSort input with (splitRec input)
                                                                      mergeSort [] | SplitRecNil = []
                                                                      mergeSort [x] | SplitRecOne = [x]
                                                                      mergeSort (lefts ++ rights) | (SplitRecPair lrec rrec) -- here
                                                                                = merge (mergeSort lefts | lrec)
                                                                                        (mergeSort rights | rrec)
                                                                    

                                                                    There you’ve used a view of the data structure that is independent of its representation; specifically, you viewed the list as a concatenation of two lists of equal length. A whole other axis to split your implementation over when it makes sense.

                                                              1. 9

                                                                Time for some nitpicking! You actually just need a Semigroup: you have no use for munit, and it’s pointless to pad the list with munits, since mplus a munit = a by the monoid laws.

                                                                1. 4

                                                                  Your comment reminded me of Data.These: since we don’t pad with mempty values, there is a notion of “the zip of two lists will return partial values at some point”.

                                                                  And that led me to Data.Align, which has the exact function we are looking for:

                                                                  salign :: (Align f, Semigroup a) => f a -> f a -> f a 
                                                                  

                                                                  http://hackage.haskell.org/package/these-0.7.4/docs/Data-Align.html#v:salign

                                                                  (that weird notion was align :: f a -> f b -> f (These a b))
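
                                                                  A quick usage sketch (hedged: salign has moved between Data.Align and Data.Semialign across versions of the these/semialign packages):

                                                                  import Data.Align (salign)

                                                                  main :: IO ()
                                                                  main = print (salign [[1], [2], [3]] [[10 :: Int], [20]])
                                                                  -- [[1,10],[2,20],[3]]: positions combine with (<>), overhang is kept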

                                                                  1. 1

                                                                    Yeah this is exactly it. Good eye!

                                                                    1. 1

                                                                      It’s funny, because I think I poked the universe in a way that resulted in salign going into Data.Align; a year or so ago, someone mentioned malign in a reddit r/haskell thread, and I pointed out that malign only needed Semigroup, and one of the participants in the thread opened an issue requesting a malign alternative with the Semigroup constraint.

                                                                      Now I feel like a Semigroup evangelist :)

                                                                  1. 7

                                                                    This is a complicated issue for me, one that I haven’t really worked out to my satisfaction. I’ll give it a shot here.

                                                                    My first reaction is send the developer a real monetary donation! Gold stars are for kindergarten, you can’t pay bills with stars.

                                                                    But then I don’t think open source is motivated by contractually stipulated monetary reward. It’s more of an artistic expression, a pride in workmanship. Yes it does offer professional exposure, but I don’t think the best and most prolific contributors are fixated on that. They think to themselves, “I’m making this software well and no short-term business objective is going to get in my way. Everyone will see this and be pleased.”

                                                                    Stars are thus saying, “You’ve made something beautiful and true.” It’s shared appreciation, online applause for a performance that has collectively elevated the audience and the performers.

                                                                    However, to continue the concert analogy, great performances do typically sell tickets. This is where open source doesn’t hold up. It’s as if the audience asks, “Can we get in for free if we just clap really loud?”

                                                                    I believe that existing web patronage models are a failure. Look at the average Patreon page – the scale of donations is like an alternate reality: maintainers collecting like $100 per month total for stunning expertise that provides huge time savings for users worldwide. The fundamental problem with the Patreon model is that the developer has relinquished their leverage the moment they release code under an open license.

                                                                    If I put myself in the shoes of the would-be patrons for a moment, I can totally see their side. Maintainers and bloggers begging for money are ubiquitous, and their requests are vague. After all, they kind of started their projects for nothing and apparently that was good enough for them, so their plea rings hollow.

                                                                    I believe that the only effective model for being paid for open source maintenance is to stop work after a certain point and negotiate contracts with specific people or companies to work on specific features. The idea is that the initial work on a project (which brings it to popularity) is the investment that allows you both to create artistry and gain leverage for future consulting.

                                                                    This is still a second-class arrangement compared to businesses based on selling products or rentals because it cannot scale beyond paid labor. The consulting rate may be high, but if you stop working on the project that’s the end of your pay from that project. By contrast, authors who sell physical books or training videos make the artifacts once and then enjoy revenue proportional to number of people buying those artifacts.

                                                                    Would that I could truly internalize this capitalist mindset. There’s just something seductive about open source software – it feels like it’s the only thing that actually stays relevant in the long term. The commercial stuff gets warped and fades away. Freedom from commercial obligations and deadlines means that open source maintainers retain the independence to do things correctly.

                                                                    Developers working together on OSS form an intellectual bond that spans nations, almost like the scientific community. It’s the software created (or the scientific truths discovered) that unite people and elevate them beyond their otherwise monotonous striving for money and physical comforts.

                                                                    I’ll end this rant here. Perhaps I’ll never reconcile these two viewpoints, the material and spiritual you might call them.

                                                                    1. 2

                                                                      I rely on the Godot engine nowadays, but don’t really have money to spare. I’d love to contribute to their Patreon campaign, but there are specific features I’d need, and I don’t think they’re a priority. So it’s hard to direct money into specific problems. Bounty programs would be more specific.

Everything about this is hard, though. Having money stuck in a bounty escrow is not advancing anything. Contract negotiation and international billing have a lot of overhead, and may turn out not to advance anything. Not paying anything, money or code, doesn’t necessarily advance the project either.

                                                                      C’est la vie, I suppose.

                                                                      1. 2

The money really has to come from businesses. It’s so easy to say “Hey, this JetBrains IDE I need costs $200” and have that approved right away, because it has to be paid for and it makes me much faster as a dev. But saying “This open source library we use is asking for donations” will not get approved, because it doesn’t have to be paid for. The most I can do for the OSS we use at work is send bugfixes upstream.

                                                                        1. 1

                                                                          IMHO this is a very useful observation. Maybe we should build a culture that tolerates little paid gimmicks on top of open source projects so that you can justify what’s effectively a donation.

                                                                          1. 1

This seems to be the way many OSS projects run now: the core is open source and usually has everything individuals need, while the extras needed for large corporate projects are proprietary. It’s called “Open Core” for people who want to search for it. Gitlab even has the source for the paid features public, but the license doesn’t let you use it without paying.

It does have some issues, though. The major one is what happens when someone replicates your paid features. Gitlab says they will accept pull requests that recreate their paid features, but they also have the resources to create 20 more by next month. As a solo dev, having someone recreate your paid features could cut out all of your revenue.

                                                                            1. 1

I think these gimmicks can very effectively be access-oriented: a custom Slack channel, a custom email address, a custom phone number, access to a special meeting. Not so much a feature they get over others, but access to the team they get over others.

                                                                            2. 1

                                                                              It might also not be paid for because the value isn’t as obvious. A bounty-style deal might get approved, because you’re essentially paying for something you require.

                                                                              It’s a question of the direction of the funds and value. This is very obscure when asking for donations in general, don’t you think?

                                                                          2. 2

                                                                            A lot of great stuff in this comment I want to reply to!

                                                                            My first reaction is send the developer a real monetary donation! Gold stars are for kindergarten, you can’t pay bills with stars.

I actually think you hit the nail on the head with your first out-of-the-gate recommendation. I think no matter how small your project is you should put up a Patreon or a Bountysource or similar. Not just for yourself for paying bills – but for the people who want to feel involved but can’t do so directly. The Patreon model is about supporting what you love. Regardless of the platform you use, you can display your Patreon count.

                                                                            The fundamental problem with the Patreon model is that the developer has relinquished their leverage the moment they release code under an open license.

                                                                            I fundamentally disagree with this. It simply isn’t about leverage. It is about eyeballs and goodwill. Look at DTNS – the show is free to listen to for all – heck, it is even free to remix how you want as it is released under creative commons. It brings in $18,000+ monthly because it feels good to support it and the perks it offers feel relevant.

I think it is about being savvy in regards to perks, and the initial market. Developers were not the initial target for Patreon; there isn’t a lot of crossover there. That said, I think many projects could have very successful Patreon setups if they tried. Some of it is about tiers: if you want to get into the Slack channel it takes an investment. The investment could be time and code or documentation, or that investment can be $5 a month. If you want to sit in on the monthly feature discussion roundtable – $40 a month or direct contributions at a level that gets you invited, etc. If you want to get the project lead’s home phone number, be a sustaining supporter at $1000 a month for at least 6 months – etc.

                                                                            After all, they kind of started their projects for nothing and apparently that was good enough for them, so their plea rings hollow.

Which is why you put up the Patreon early, so it doesn’t look like some bolt-on or a beg later. It is there from before anyone would consider contributing. Neovim had this as a bolt-on after the initial funding push, and while I find their pitch possibly too gentle, at least it is there at the bottom.

                                                                            to stop work after a certain point and negotiate contracts

This is devastating to goodwill, and will encourage forks so someone can take your work and become the better-known version of it with those +3 features. I do not think this is a good way forward.

                                                                            (and one last mostly irrelevant reply)

                                                                            But then I don’t think open source is motivated by contractually stipulated monetary reward. It’s more of an artistic expression, a pride in workmanship.

                                                                            I honestly think far more work is done in anger than for artistry. In market terms, more “painkillers” than “vitamins”, doubly so in open source. “The bleep won’t beep bleep bleep what type of bleep bleep wrote this. I will just fix it, bleep it!”

                                                                            1. 2

                                                                              But then I don’t think open source is motivated by contractually stipulated monetary reward.

                                                                              I guarantee that is sometimes the case. I’ve been turned down offering to pay for work on FOSS projects or public sites specifically because the developers and admins wanted to keep money and its psychological effects out of that part of their work. They were doing it for ideology, charity, fun, and so on. They had other work they did for money. They kept them separate.

I still advise offering to pay the going rate for the work just in case they need it. If they refuse, maybe a cup of coffee or lunch as thanks. If they refuse that, then a heartfelt thanks, a star, and whatever else, since they’re some really devoted people. I won’t say selfless, since some do it for not-so-nice reasons and the good ones get personal satisfaction from the good they do. Definitely dedicated or devoted to helping others with their work without asking for something in return, though. I respect and appreciate those people. I also respect the ones doing it for money, since otherwise they might have been doing something without large benefit, or benefiting companies like Oracle that hurt us all.

EDIT: Someone might wonder what I meant by not taking money for not-so-nice reasons. I’ll give a few examples. One is academics who release code as a proof of concept and/or with potential to benefit people if someone else works on it. They’re paid and promoted for output of papers, not maintainable FOSS, so many would refuse offers to extend older projects. Proprietary software vendors doing OSS side-projects and/or open-core companies might refuse paid work on their FOSS because the features would compete with their commercial offerings. Regulated industries using FOSS or OSS components that had to be certified in an expensive process would have to recertify them to use modified forms; they often don’t do bug/security fixes for this reason, and might turn down offers on specific libraries. Finally, some people might want the software to work a specific way for arbitrary reasons and/or straight-up hate some potential contributors for personal attributes. There’s a religiously-motivated project whose maintainer fits that description.

                                                                              So, there’s some examples of maintainers that would turn down money for reasons having nothing to do with selflessness.

                                                                              1. 1

It is basically the same as science, which is also tragically broken due to funding (and valuation) issues.

                                                                                If only there were a global fund for free projects that would map their dependency tree, perform some health-checking and distribute donations in a predictable fashion…

Then a government or a company relying on free software might donate with indications of what to support. A government might, for example, donate 1% of a purchased software project’s price to its dependencies, or require the supplier to do so… That would be about €5.000.000 a year just for Czechia.

                                                                                1. 1

                                                                                  My first reaction is send the developer a real monetary donation! Gold stars are for kindergarten, you can’t pay bills with stars.

I’m afraid that for most people, the hurdle to sending money over the internet is much higher than that of telling the developer they like what was done via a star (or analogous system)…

                                                                                1. 5

                                                                                  Computer science clocksteps at the rate of algorithms and discoveries. Languages are always going to come and go, unless the language springs up from a good theory.

If you want to understand why this would be true, just look at the history of mathematics. Read about algebraic representations, which kinds of abacuses were used, slide rules, mechanical calculators. You will find that what we have today is a small fragment of what used to be, and the stuff that still exists was lugged to the present because there aren’t many obviously better ways to do the same thing.

On this basis, I’d propose that the current “top 20” by Redmonk cannot form any kind of long-running status quo. It’s a large list of programming languages rooted in the same theory (Javascript, Java, Python, PHP, C#, C++, Ruby, C, Objective-C, Swift, Scala, Go, TypeScript, Perl, Lua).

There is going to be only one in 30 years, and I think it’ll fall along the C or Javascript axis. They are syntactically close, and a lot of software was and is written in these languages. Although there is even more written in C++, it’s way too contrived to survive without being reduced back to something like C.

CSS may have some chance of surviving, but it’s quite different from the rest. About Haskell I’m not sure. I think typed lambda calculus will appear or reappear in a better form elsewhere. The language will be similar to Haskell though, and may bear the same name.

Unix shell and its commands will probably survive, while Powershell and DOS will wither. Windows seems to have its days numbered already. Sadly it was not because of the open source movement; Microsoft just botched things up again.

R seems like a write-and-forget language. But it is rooted in Iverson’s notation… Though perhaps the notation itself will be around, but not the current instances of it.

I think that hardware getting more concurrent and diverging from the linear execution model will cause a permanent shakeup of this list in the short term. The plethora of programming languages that prescribe a rigid evaluation strategy will simply not survive. Though I have a bit of bias toward thinking this way, so I may not be a reliable source for peering into the future.

But I think this is a better guide than looking at programming language rankings.

                                                                                  1. 8

                                                                                    I think, most importantly, we haven’t even seen anything like the one language to rule them all. I expect that language to be in the direction of Conal Elliott’s work compiling to categories.

A language that is built around category theory from the start: you have many different syntactic constructs, and the ones you use in a given expression determine the properties of the category that the expression lives in. Such a language could locally have the properties of all the current languages and could provide optimal interoperation.

                                                                                    BTW, I think we won’t be calling the ultimate language a “programming language” because it’ll be as good for describing electrical circuits, mechanical designs and biological systems as for describing programs. So I guess it’ll be called something like a specification language.

                                                                                    1. 4

“we haven’t even seen anything like the one language to rule them all.”

                                                                                      That’s exactly what the LISPers always said they had. Their language could be extended to do anything. New paradigms and styles were regularly backported to it as libraries. It’s also used for hardware development and verification (ACL2).

                                                                                      1. 3

                                                                                        Well, it’s hard to say anything about LISPs in general since the span is so vast and academic, and especially for me, since my contact with any LISP is quite limited. But, from my understanding of the common usage of LISP, it doesn’t qualify.

                                                                                        First of all, I think dropping static analysis is cheating, but I don’t intend to tap into an eternal flame war here. What I mean when I say “the properties of the current languages” is no implicit allocations, borrow-checking and inline assembly like in Rust, purity and parametricity like in Haskell, capabilities-security like in Pony etc. etc. , and not only the semantics of these, but also compilers taking advantage of these semantics to provide static assistance and optimizations (like using the stack instead of the heap, laziness & strictness analysis etc.).

                                                                                        And I’m also not just talking about being able to embed these into a given language; you should also be able to write code such that if it’s simple enough, it should be usable in many of them. For instance, it’d be hard to come up with some language semantics in which the identity function cannot be defined, so the identifier id x = x should be usable under any local semantics (after all every category needs to have identity morphisms). You should also be able to write code that interfaces between these local semantics without leaving the language and the static analysis.

                                                                                        I know you can embed these things in LISP, expose enough structure from your LISP code to perform static analysis, get LISP to emit x86 assembly etc. etc. But, IMHO, this doesn’t make LISP the language I’m talking about. It makes it a substrate to build that language on.

                                                                                    2. 2

                                                                                      I think one major difference between math and computer science, and why we’re not going to see a lot of consolidation for a while (not even in 30 years, I don’t think), is that code that’s on the internet has a way of sticking around, since it’s doing more than just sitting in research papers, or providing a tool for a single person.

I doubt we’ll see 100% consolidation any time soon, if for no other reason than that it’s too easy to create a new programming language.

Hardware changes might shake up this list, though I think it’ll take 30 years for that to be realized, and a lot of new programming languages will fall out of it.

                                                                                      We’re definitely still going to have COBOL in 30 years, and Java, and C. The rest, I’m unsure of, but I’ll bet that we’ll be able to recognize the lineage of a lot of the top 30 when we look in 30 years.

                                                                                      1. 1

R seems like a write-and-forget language. But it is rooted in Iverson’s notation.

                                                                                        Did you mean to write J or APL? I understand R as the statistics language.

                                                                                      1. 6

                                                                                        Government jobs tend to be 40 hours or less. State government in my state has a 37.5 hour standard. There is very occasional off-hours work, but overtime is never required except during emergencies – and not “business emergencies”, but, like, natural disasters.

                                                                                        1. 8

                                                                                          I’m surprised that tech workers turn up their nose at government jobs. Sure, they pay less, but the benefits are amazing! And they really don’t pay too much less in the scheme of things.

                                                                                          How many private sector tech jobs have pensions? I bet not many.

                                                                                          1. 9

                                                                                            I work in a city where 90% of the folks showing up to the local developer meetup are employed by the city or the state.

                                                                                            It’s taken a lot of getting used to being the only person in the room who doesn’t run Windows.

                                                                                            1. 4

                                                                                              I feel like this is pretty much the same for me (aside from the meetup bit).

Have you ever worked with Windows, or have you been able to stay away from it professionally?

                                                                                              1. 3

                                                                                                I used it on and off for a class for about a year in 2003 at university but have been able to avoid it other than that.

                                                                                              2. 1

                                                                                                Yeah. I hadn’t used Windows since Win 3.1, until I started working for the state (in the Win XP era). I still don’t use it at home, but all my dayjob work is on Windows, and C#.

                                                                                              3. 5

                                                                                                they pay less

Not sure about this one. When you talk about pay, you also have to count all the benefits that come with it. In addition, they usually push you out at 5pm, so your hourly rate is very close to the contractual one.

                                                                                                1. 3

Most people complaining that they pay less are the tech workers who hustle hard in Silicon Valley or at one of the big N companies. While government jobs can pay really well and have excellent value, especially considering pay per hour and benefits like pensions, a Google employee’s ceiling is going to be way higher.

                                                                                                  There’s a subreddit where software engineers share their salaries and it seems like big N companies can pay anything from $300k–700k USD when you consider their total package. No government job is going to match that.

                                                                                                2. 3

                                                                                                  Do you work in the public sector? What’s it like?

                                                                                                  1. 13

                                                                                                    I do.

                                                                                                    Pros: hours, and benefits. Less trend-driven development and red queen effect. Less age discrimination (probably more diversity in general, at least compared to Silicon Valley).

                                                                                                    Cons: low pay, hard to hire and retain qualified people. Bureaucracy can be galling, but I imagine that’s true in large private sector organizations, too.

                                                                                                    We’re not that behind the times here; we’ve avoided some dead-ends by being just far enough behind the curve to see stuff fail before we can adopt it.

                                                                                                    Also, depending on how well your agency’s goals align with your values, Don’t Be Evil can actually be realistic.

                                                                                                    1. 6

                                                                                                      I will say, I once did a contract with the Virginia DOT during Peak Teaparty. Never before in my life have I seen a more downtrodden group. Every single person I talked to was there because they really believed in their work, and every single one of them was burdened by the reality that their organization didn’t and was cutting funding, cutting staff, and cutting… everything.

                                                                                                      They were some of the best individuals I ever worked with, but within the worst organization I’ve ever interacted with.

Contrast that with New York State – I did a shitton of work for a few departments there. These were just folks who showed up to get things done. They were paid well, respected, and accomplished what they could within the confines of their organization. They were also up for letting people knock off work at 2PM.

                                                                                                      1. 2

                                                                                                        Also, depending on how well your agency’s goals align with your values, Don’t Be Evil can actually be realistic.

                                                                                                        Agreed. There’s no such thing as an ethical corporation.

                                                                                                        Do you mind sharing the minimum qualifications of a candidate at your institution? How necessary is a degree?

                                                                                                        I’m asking for a friend 😏

                                                                                                        1. 2

                                                                                                          What about B corps?

                                                                                                          1. 1

                                                                                                            No, not even them.

                                                                                                            When you think about what “profit” is (ie taking more than you give), I think it’s really hard to defend any for-profit organization. Somebody has to lose in the exchange. If it’s not the customers, it’s the employees.

                                                                                                            1. 5

                                                                                                              That’s a pretty cynical view of how trade works & not one I generally share. Except under situations of effective duress where one side has lopsided bargaining leverage over the other (e.g. monopolies, workers exploited because they have no better options), customers, employees and shareholders can all benefit. Sometimes this has negative externalities but not always.

                                                                                                              1. 1

                                                                                                                Then I guess we must agree to disagree 🤷🏻‍♂️

                                                                                                              2. 2

                                                                                                                Profit is revenue minus expenses. Your definition, taking more than you give, makes your conclusion a tautology. i.e., meaningless repetition.

                                                                                                                Reciprocity is a natural law: markets function because both parties benefit from the exchange. As a nod to adsouza’s point: fully-informed, warrantied, productive, voluntary exchange makes markets.

                                                                                                                Profit exists because you can organize against risk. Due to comparative advantage, you don’t even have to be better at it than your competitors. Voluntary exchange benefits both weaker and stronger parties.

                                                                                                                1. 1

                                                                                                                  Profit is revenue minus expenses. Your definition, taking more than you give, makes your conclusion a tautology. i.e., meaningless repetition.

                                                                                                                  I mean, yes, I was repeating myself. I wasn’t concluding anything: I was merely rephrasing “profit.” I’m not sure what you’re trying to get at here aside from fishing for a logical fallacy.

                                                                                                                  a tautology. i.e., meaningless repetition.

                                                                                                                  Intentionally meta?

                                                                                                                  Reciprocity is a natural law

                                                                                                                  Yup. No arguments here. However, reciprocity is not profit. In fact, that’s the very distinction I’m trying to make. Reciprocity is based on fairness and balance, that what you get should be equal to what you give. Profit is expecting to get back more than what you put in.

                                                                                                                  Profit exists because you can organize against risk.

                                                                                                                  Sure, but not all parties can profit simultaneously. There are winners and losers in the world of capitalism.

                                                                                                                2. 1

                                                                                                                  So, if I watch you from afar and realize that you’ll be in trouble within seconds, come to your aid, and save your life (without much effort on my side) in exchange for $10, who’s the one losing in this interaction? Personally, I don’t think there’s anything morally wrong with playing positive-sum games and sharing the profits with the other parties.

                                                                                                              3. 1

For an entry-level developer position, we want either a bachelor’s degree in an appropriate program with no experience required, an associate’s degree and two years of experience, or no degree and four years of experience. The help-desk and technician positions probably require less at entry level, but I’m not personally acquainted with their hiring process.

                                                                                                                1. 2

                                                                                                                  I would fall into the last category. Kind of rough being in the industry for 5 years and having to take an entry level job because I don’t have a piece of paper, but that’s how it goes.

                                                                                                                  1. 2

                                                                                                                    For us, adding an AS (community college) to that 5 years of experience would probably get you into a level 2 position if your existing work is good. Don’t know how well that generalizes.

                                                                                                                    1. 2

                                                                                                                      Okay cool! I have about an AS in credits from a community college I’d just need to graduate officially. Though, at that point, I might as well get a BS.

                                                                                                                      Thanks for helping me in my research :)

                                                                                                            2. 4

                                                                                                              I don’t, but I’m very envious of my family members who do.

                                                                                                              One time my cousin (works for the state’s Department of Forestry) replied to an email on Sunday and they told him to take 4 hours off Monday to balance it off.

That said, from a technological perspective I’d imagine it would be quite behind the times and move very slowly. If you’re a diehard agile-manifesto person (I’m not), I probably wouldn’t recommend it.

                                                                                                              EDIT: I guess it’s really what you value more. In the public sector, you get free time at the expense of money. In the private sector, vice versa. I can see someone who chases the latest technologies and loves to code all day long being miserable there, but for people who just code so they can live a fulfilling life outside of work it could be a good fit.

                                                                                                        1. 41

                                                                                                          It’s also developer-friendly because of its excellent wiki.

                                                                                                          I learned Linux doing everything by hand on a Slackware system, then moved to Ubuntu after ~8 years when I realized I’d stopped learning new things. Then a couple years ago I realized I didn’t understand how a bunch of things worked anymore (systemd, pulseaudio, Xorg, more). I looked at various distros and went with Arch because its wiki had helped me almost every time I’d had an issue.

Speaking of distros, I’m currently learning Nix and NixOS. It’s very nice so far. If I can learn to build packages I’ll probably replace lobsters-ansible with it (the recent issues/PRs/commits tell a tale of my escalating frustration at design limitations). Maybe also my personal laptop: I can experiment with using nix for xmonad first (because it’s mostly configured by editing + recompiling) and for dealing with python packaging, which has never worked for me, then move completely to NixOS if that goes well.

                                                                                                          1. 9

                                                                                                            I switched from Mac to NixOS and couldn’t be happier. At work we use Nix for building Haskell projects as well.

                                                                                                            1. 9

                                                                                                              The Arch wiki actually seems to be the only good documentation for using the advanced functionality of newer freedesktop components like pulseaudio, or much older software like Xorg.

But I’ve noticed its documentation for enterprise software like ZFS is usually hot garbage. Not surprising given the community. The recommendations are frequently hokey nonsense: imaginary micro-optimizations or blatantly incorrect feature descriptions.

                                                                                                              What do you find better about nix for making packages than, say, making an rpm or deb? I’ve found those package systems valuable for large scale application deployment. Capistrano has also been nice for smaller scale, with its ability to deploy directly from a repo and roll back deployments with a simple symlink swap. And integration libraries are usually small enough that I’m comfortable just importing the source into my project and customizing them, which relieves so many minor tooling frustrations overall.

                                                                                                              Of course in the end the best deployment system is the one you’ll actually use, so if you’re excited about packaging and deploying with nix, and will thus devote more time and energy to getting it just right, then that’s de facto the best option.

                                                                                                              1. 3

                                                                                                                What do you find better about nix for making packages than, say, making an rpm or deb?

                                                                                                                I don’t, yet. The “If I can learn to build packages” sentence links to an issue I’ve filed. I was unable to learn how to do so from the official documentation. I’ve almost exclusively been working in languages (PHP, Python, Ruby, JavaScript) that rpm/deb have not had good support for, prompting those languages to each implement their own package management systems that interface poorly or not at all with system packaging.

I’ve used Capistrano, Chef, Puppet, and currently use Ansible for deployment. Capistrano and Ansible at least try to be small and don’t have pretensions of being something other than an imperative scripting tool, but I’ve seen all of them break servers on deployment, let servers drift out of sync with the config, or fail to produce new deployments that match the existing one. Nix/NixOS/NixOps approach the problem from a different direction; it looks like they started from the idea of what system configuration is instead of scripting the manual steps of maintaining one. Unfortunately nix replicates the misfeature of templating config files and providing its own config file on top of them instead of checking complete config files into a repo. Hopefully this won’t be too bad in practice, though it’s not a good sign that they implemented a programming language.
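To make that concrete, here’s roughly what the declarative style looks like (a minimal sketch using standard NixOS options; the hostname and package choices are placeholders I made up):

# configuration.nix – a minimal sketch of NixOS’s declarative style.
# Option names are from the stock NixOS module system; values are placeholders.
{ config, pkgs, ... }:
{
  networking.hostName = "example";
  services.openssh.enable = true;
  environment.systemPackages = [ pkgs.git pkgs.vim ];
}

NixOS then renders the actual config files (sshd_config and so on) from options like these, which is exactly the templating-on-top-of-config-files pattern I’m wary of.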

                                                                                                                I appreciate your closing sentiment, but I’m not really trying to reach new heights of system configuration. I’m trying to avoid losing time to misconfiguration caused by services that fundamentally misunderstand the problem, leading to booby traps in common usage. I see almost all of my experience with packaging + deployment tools as a loss to be minimized in the hopes that they waste less time than hand-managing the global variables of public mutable state that is a running server.

                                                                                                                1. 1

Hmmm. I don’t think the problems you listed are 100% avoidable with any tool, just easier to avoid with some than with others.

                                                                                                                  I like Puppet and Capistrano well enough. But I also think packaging a Rails application as a pre-built system package is definitely the way to go, with all gems installed and assets compiled at build time. That at least makes the app deployment reproducible, though it does nothing for things like database migrations.

                                                                                                                2. 1

                                                                                                                  What do you find better about nix for making packages than, say, making an rpm or deb?

                                                                                                                  Let me show you a minimal nix package:

                                                                                                                  pkgs.writeScriptBin "greeter" "echo Hello $1!"
                                                                                                                  

                                                                                                                  Et voila! You have a fine nix package of a utility called greeter that you can let other nix packages depend on, install to your environment as a user or make available in nix-shell. Here’s a function that returns a package:

                                                                                                                  greeting: pkgs.writeScriptBin "greeter" "echo ${greeting} $1!"
                                                                                                                  

What you have here is a lambda expression that accepts something you can splice into a string and returns a package! Nix packages in nixpkgs are typically functions, and they offer a great amount of customizability without much effort (for both the author and the user).
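Applying such a function is just as small. As a sketch (the file name and greeting are made up), you can drop the result into a shell environment:

# shell.nix – hypothetical usage of the greeter function above.
{ pkgs ? import <nixpkgs> {} }:
let
  # The lambda from above: a greeting string in, a package out.
  mkGreeter = greeting: pkgs.writeScriptBin "greeter" "echo ${greeting} $1!";
in
  # A shell whose PATH contains the greeter built with "Ahoy".
  pkgs.mkShell { buildInputs = [ (mkGreeter "Ahoy") ]; }

With that saved as shell.nix, nix-shell --run 'greeter world' should print “Ahoy world!”.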

At work, we build, package and deploy with nix (on the cloud and on premises), and we probably have ~1000 nix packages of our own. Nobody is counting though, since writing packages doesn’t feel like a thing you do with nix. Do you count the number of curly braces in your code, for instance? If you’re used to purely functional programming, nix is very natural and expressive. So much so that you could actually write your application in it, if its IO system were designed for that.

                                                                                                                  It also helps a lot that nix can seamlessly be installed on any Linux distro (and macOS) without getting in the way of its host.

                                                                                                                  1. 1

                                                                                                                    If only ZFS from Oracle hadn’t had the licensing compatibility issues it currently has, it would probably have landed in the kernel by now. Subsequently, the usage would have been higher and so would the quality of the community documentation.

                                                                                                                  2. 4

                                                                                                                    If I can learn to build packages I’ll probably replace lobsters-ansible with it

                                                                                                                    Exactly. I don’t have much experience with Nix (none, actually). But in theory it seems like it can be a really nice OS-level replacement for tools like Ansible, SaltStack, etc.

                                                                                                                    1. 1

                                                                                                                      This is exactly what NixOps does! See here.
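Roughly, a NixOps network is a set of NixOS machine configs plus deployment metadata (a sketch from memory, so treat the details as approximate):

{
  # A hypothetical one-machine network: an ordinary NixOS config
  # plus deployment options telling NixOps where the machine lives.
  webserver = { config, pkgs, ... }: {
    deployment.targetHost = "203.0.113.10";
    services.nginx.enable = true;
  };
}

nixops deploy then builds the systems and pushes them to the targets.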

                                                                                                                      1. 2

                                                                                                                        Thanks for the video. I’ll watch it over the weekend!

Curious – are you also running NixOS on your personal machine(s)? I’ve been running Arch for a long time now but am considering switching to Nix just because it makes so much more sense. But the Arch documentation and the number of packages available (counting the AUR) are difficult to leave behind.

                                                                                                                        1. 1

                                                                                                                          Yes, I’m using it on my personal machine :). I wouldn’t recommend switching to NixOS all at once, what worked for me was to install the Nix package manager, use it for package management and creating development environments, and then only switch once I was fully convinced that NixOS could do everything I wanted from my Ubuntu install. This took me about a year, even with me using it for everything at work. Another approach would be to get a separate laptop and put NixOS on that to see how you like it.

                                                                                                                          1. 1

                                                                                                                            Interesting. I’ll try it out for some time on a VM to get a hang of it. Thanks for the info!

                                                                                                                    2. 3

                                                                                                                      Even as a Ubuntu user, I’ve frequently found the detailed documentation on the Arch wiki really helpful.

                                                                                                                      1. 2

I really want to use Nix, but I tried installing it last month and it doesn’t seem to have great support for Wayland yet, which is a deal breaker for me: I use multiple HiDPI screens and Wayland makes that experience much better. Has anyone managed to get Nix working with Wayland?

                                                                                                                        1. 2

Arch’s wiki explaining how to do everything piecemeal seems strange, given that its philosophy assumes users should be able to meaningfully help fix whatever problems cause their system to self-destruct on upgrade. It’s obviously appreciated, but still… confusing, given how many Arch users I’ve met who know nothing about their system except what the wiki’s told them.

                                                                                                                          1. 1

                                                                                                                            I gave up on my nix experiment, too much of it is un- or under-documented. And I’m sorry I derailed this Arch discussion.

                                                                                                                            1. 1

I’m happy to help if I can! I’m on the DevOps team at work, where we use it extensively, and I did a presentation demonstrating usage at linux.conf.au this year. All my Linux laptops run NixOS and I’m very happy with it as an operating system. My configuration lives here.

                                                                                                                              1. 2

                                                                                                                                Ah, howdy again. I’m working my way through the “pills” documentation to figure out what’s missing from the nix manual. If you have a small, complete example of how to build a single package that’d probably be pretty useful to link from the github issue.

                                                                                                                                1. 2

                                                                                                                                  I made a small change to the example to get it to build, and I’ve added it as a comment to your issue.

                                                                                                                            1. 6

                                                                                                                              I think that the view that type systems exist to just enforce rules and check that your programs are correct is very incomplete: type systems are extremely powerful reasoning mechanisms that themselves allow you to express properties of your programs and are a space where you construct them, not just restrict them at the value level. I think Idris is the best example of this, although Haskell might be more accessible and serve as a bridge if you want to go in that direction. I suggest getting the Idris book, currently going through it and it’s extremely well-written!

The central idea in Idris is dependent types: essentially they remove the distinction between a type variable and a regular variable, allowing you to say, for example, that a Matrix 3 4 is a wholly different type from a Matrix 4 3, and when you have access to such specific types, a large part of your programming is lifted to the type level.

The author still seems to think, for example, that good type systems don’t force you to write annotations unless you really must. In Idris, type annotations are required, because they aren’t merely annotations to help the compiler infer other types; they are the place where you write a large part of your program (although dependent types do make type inference harder, so there’s a technical component to it too).

                                                                                                                              1. 1

                                                                                                                                Well, nothing stops you from writing stringly typed code full of mutations using IORefs in Idris. The point with strong type systems is not that you have to write safe code, it’s that you can do so.

                                                                                                                              1. 1

                                                                                                                                “all you need to annotate are function parameters and return values” - true in C++ now too, it’s not just Rust.

                                                                                                                                “gtest sucks” - it does, but there are far better alternatives. I agree that pytest rocks. I’m curious as to whether dependency injection and mocking are better in Rust than in C++, especially given the lack of compile-time reflection.

                                                                                                                                1. 3

                                                                                                                                  In my experience C++ generally requires more annotation of types within a function body, so it is still fair to call out annotating only function parameters and return values as a strength of Rust in particular.

                                                                                                                                  For example in Rust:

                                                                                                                                  // Within the same function body we push a `&str` into the vector
// so the compiler understands this must be a `Vec<&str>`.
                                                                                                                                  let mut vec = Vec::new();
                                                                                                                                  vec.push("str");
                                                                                                                                  

                                                                                                                                  versus C++:

                                                                                                                                  // Vector element type cannot be inferred.
                                                                                                                                  std::vector<const char *> vec;
                                                                                                                                  vec.push_back("str");
                                                                                                                                  
                                                                                                                                  1. 1

                                                                                                                                    C++17 has class template argument deduction, so you can just say auto vec = std::vector{"str"}; now. Though Rust’s type inference is obviously more powerful.

                                                                                                                                1. 3

                                                                                                                                  Working remotely and having flexible working hours seem like two completely different topics to me. I wouldn’t mix them up.

                                                                                                                                  1. 2

                                                                                                                                    Well, not completely different: if remote means anywhere in the world, setting company-wide fixed working hours would be impractical. So the issues aren’t entirely orthogonal.

                                                                                                                                    1. 1

                                                                                                                                      I’d say it depends entirely on the job and the team. I work remotely with people all around North America and Europe (mostly), and I appreciate knowing when someone will be online to answer a question or handle a request. I see no reason why a remote worker shouldn’t be asked to work 9-5 if it’s needed for communication, productivity, or other reasons. That is, to me, a different topic from physical location.

                                                                                                                                    2. 1

                                                                                                                                      You certainly can have flexible hours without working remotely, but how many office-bound staff have access to that office outside of, say, 6am to 8pm?

                                                                                                                                    1. 4

                                                                                                                                      Honestly, the driving script being in bash feels like cheating. It suggests that, despite all the new toys, metaprogramming in C++ still isn’t that powerful.

                                                                                                                                      Couldn’t this be done more easily with Lisp macros? I can sort of see how to do it with D’s compile-time constructs.

                                                                                                                                      1. 3

                                                                                                                                        I don’t think there’s much point comparing such exercises across languages. For instance, with Template Haskell you can run arbitrary Haskell code, and even do IO, at compile time; you could write a 3D shooter that way. But I’d still say C++ templates are more powerful than TH in many respects, because of how they interact with the rest of their respective languages.
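
                                                                                                                                        To make the “IO at compile time” point concrete, here’s a minimal sketch using only base and template-haskell (the module name and the baked-in value are made up for illustration):

                                                                                                                                          {-# LANGUAGE TemplateHaskell #-}
                                                                                                                                          module BuildEnv where

                                                                                                                                          import Language.Haskell.TH (runIO, stringE)
                                                                                                                                          import System.Environment (getEnvironment)

                                                                                                                                          -- runIO performs arbitrary IO while the module is being compiled; here
                                                                                                                                          -- the compiling machine's PATH is baked into the binary as a literal.
                                                                                                                                          buildPath :: String
                                                                                                                                          buildPath = $(runIO (maybe "unknown" id . lookup "PATH" <$> getEnvironment) >>= stringE)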

                                                                                                                                        1. 1

                                                                                                                                          Maybe I shouldn’t have said “powerful” but “convenient”? I think it does make sense to make these comparisons, at least for this example. In both Lisp and D you have all of the language at compile time, so you can do just about anything.

                                                                                                                                          It appears that even when attempting a ridiculous feat, and thus accepting some inconvenience, C++’s compile-time features are still too onerous to host the whole game loop.

                                                                                                                                          Edit: After thinking about this for a second, I’m not sure it’s possible in D anymore, since compile-time D functions have to be deterministic.

                                                                                                                                          1. 2

                                                                                                                                            I understand your point about convenience, but my point is that the real purpose of metaprogramming features isn’t to write interactive games; what matters is how they interact with the run-time features. For instance, C++ templates are more powerful than Template Haskell because of template argument deduction, and because instantiating one template can seamlessly cause other templates to be instantiated, whereas in TH you trigger every expansion by hand. If you ignore the interaction with the rest of the language, the “best” metaprogramming would simply be generating C++ code using C++, then running that program as a preprocessing step. That’s why I think comparing the power of metaprogramming features across languages through the non-metaprogramming things you can do with them is pointless.
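
                                                                                                                                            As a small illustration of what “by hand” means in TH, here’s its typical shape, using the lens package’s makeLenses as the code generator (the particular generator is incidental):

                                                                                                                                              {-# LANGUAGE TemplateHaskell #-}

                                                                                                                                              import Control.Lens (makeLenses, (^.))

                                                                                                                                              data Point = Point { _x :: Double, _y :: Double }

                                                                                                                                              -- Every expansion is an explicit top-level splice the programmer writes;
                                                                                                                                              -- nothing is instantiated on demand the way a used C++ template is.
                                                                                                                                              makeLenses ''Point

                                                                                                                                              main :: IO ()
                                                                                                                                              main = print (Point 1 2 ^. x)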

                                                                                                                                            1. 1

                                                                                                                                              Ah, it does sound inconvenient in TH to not have automatic instantiations.

                                                                                                                                              1. 1

                                                                                                                                                Yeah, it is. TH is much more bolted-on in Haskell than templates are in C++, but on the other hand Haskell’s type system is vastly more powerful without metaprogramming, so you rarely really need it. As I said, it’s hard to compare across languages :)
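
                                                                                                                                                For instance, much of what needs code generation elsewhere is plain deriving in Haskell. A sketch, assuming the aeson package:

                                                                                                                                                  {-# LANGUAGE DeriveGeneric, DeriveAnyClass #-}

                                                                                                                                                  import Data.Aeson (ToJSON, encode)
                                                                                                                                                  import GHC.Generics (Generic)

                                                                                                                                                  -- A generics-based instance covers what other languages often need
                                                                                                                                                  -- metaprogramming for: no splice, no code generator, just `deriving`.
                                                                                                                                                  data Point = Point { x :: Double, y :: Double }
                                                                                                                                                    deriving (Generic, ToJSON)

                                                                                                                                                  main :: IO ()
                                                                                                                                                  main = print (encode (Point 1 2))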

                                                                                                                                        2. 2

                                                                                                                                          In Lisp you have the full language at your disposal at compile time, so it’s way too easy.

                                                                                                                                          1. 1

                                                                                                                                            That was my first thought too: the actual game loop is still implemented at runtime (with a bash runtime), which is sort of cheating. On the other hand, since one of my research areas is modeling game mechanics in formal logic, it somehow feels natural to accept an implementation of a state->state' transition function as morally equivalent to an implementation of a game. :-)
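
                                                                                                                                            In that spirit, the “game” reduces to a pure step function and the driver is incidental. A minimal Haskell sketch (Game and step are hypothetical names):

                                                                                                                                              -- The whole game, morally: a pure state->state' transition.
                                                                                                                                              data Game = Game { position :: Int } deriving Show

                                                                                                                                              step :: Char -> Game -> Game
                                                                                                                                              step 'l' g = g { position = position g - 1 }
                                                                                                                                              step 'r' g = g { position = position g + 1 }
                                                                                                                                              step _   g = g

                                                                                                                                              -- Any driver (bash, IO, a test harness) just folds inputs through it.
                                                                                                                                              main :: IO ()
                                                                                                                                              main = print (foldl (flip step) (Game 0) "rrl")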