1. 4

    This is a problem with any language or library. You need to know what is available in the Python library and what it does to use it effectively. You need to know the binaries in /bin to use the shell effectively. And so on.

    It’s just like learning a human language: Until you use the vocabulary enough to get comfortable, you are going to feel lost, and spend a lot of time getting friendly with a dictionary.

    1. 8

      This is a problem with any language or library. You need to know what is available in the Python library and what it does to use it effectively. You need to know the binaries in /bin to use the shell effectively. And so on.

      I think this probably misses the point. The Python solution was able to compose a couple of very general, elementary problem-solving mechanisms (iteration, comparison), of which Python has a very limited vocabulary (there’s maybe a half dozen control constructs, total?), to quickly arrive at a solution (albeit a limited, non-parallel one that’s intuitive and perhaps 8 times out of 10 does the job). The standard library might offer an implementation already, but you could get a working solution without crawling through the docs (and you could probably guess the name anyway).
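
      To make that concrete with a made-up task (just a sketch of the style, not the article’s actual example): finding the largest gap between consecutive values needs nothing but iteration and comparison:

      def largest_gap(xs):
          # Iterate over adjacent pairs, keeping the biggest difference seen.
          best = 0
          for prev, cur in zip(xs, xs[1:]):
              if cur - prev > best:   # plain comparison, nothing to look up
                  best = cur - prev
          return best

      assert largest_gap([3, 7, 8, 20, 21]) == 12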

      J, owing to its overwhelming emphasis on efficient whole-array transformation, required selection from a much, much larger and far more specialized set of often esoteric constructs/transformations, all of which have unguessable symbolic representations. The documentation offers little to aid this search, complicating a task that was already quite a bit less intuitive than Python’s naive iteration approach.

      1. 4

        For a long time now, I’ve felt that the APL languages were going to have a renaissance. Our problems aren’t getting any simpler, so raising the level of abstraction is the way forward.

        The emphasis on whole-array transformation seems like a hindrance, but imagine for a second that RAM becomes so cheap that you simply load all of your enterprise’s data into memory on a single machine. How many terabytes is that? Whole-array looks very practical then.

        For what it’s worth, there is a scheme to the symbols in J. You can read meaning into their shapes. Look at grade-up and grade-down. They are like little pictures.

        J annoys me with its fork and hook forms. That goes past the realm of readability for me. Q is better; it uses words.

        What I’d like to see is the entire operator set of, say, J brought into mainstream languages as a library. Rich operations raising the level of abstraction are likely more important than syntax.
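
        For instance (a sketch with made-up naming): J’s grade-up is essentially what array libraries call argsort, and as a plain library function it’s tiny:

        def grade_up(xs):
            # J's grade-up (/:): the indices that would sort the array,
            # rather than the sorted values themselves.
            return sorted(range(len(xs)), key=lambda i: xs[i])

        xs = [30, 10, 20]
        assert grade_up(xs) == [1, 2, 0]
        assert [xs[i] for i in grade_up(xs)] == [10, 20, 30]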

        1. 4

          J annoys me with its fork and hook forms. That goes past the realm of readability for me.

          In textual form I agree. The IDE has a nice graphical visualization of the dataflow that I find useful in ‘reading’ that kind of composition in J code though. I’ve been tempted to experiment with writing J in a full-on visual dataflow style (a la Max/MSP) instead of only visualizing it that way.

          1. 2

            I find it a lot easier to write J code by hand before copying it to a computer. It’s easier to map out the data flow when you can lay it out in 2D.

            1. 1

              Have you looked at Q yet?

            2. 1

              That would be a very useful comparison of the usability of a compact text syntax vs a visual language. I imagine that discoverability is better with a visual language, as by definition it is interactive.

            3. 2

              I started implementing the operators in Scala once - the syntax is flexible enough that you can actually get pretty close, and a lot of them are the kind of high level operations that either already exist in Scala or are pretty easy to implement. But having them be just a library makes the problem described in the article much much worse - you virtually have to memorize all the operators to get anything done, which is bad enough when they’re part of the language, but much worse when they’re a library that not all code uses and that you don’t use often enough to really get them into your head.

              1. 1

                It could just be a beginning stage problem.

                1. 1

                  It could, but the Scala community has actually drawn back from incremental steps in that direction, I think rightly - e.g. scalaz 7 removed many symbolic operators in favour of functions with textual names. Maybe there’s some magic threshold that would make the symbols OK once we passed it, but I can only spend so much time exploring the possibility without seeing an improvement.

                  1. 1

                    Oh. I’d definitely go with word names. Q over J. To me, the operations are the important bit.

          2. 5

            The difference is that Python is already organized by its standard library, has cookbooks, and doesn’t involve holistically thinking of the entire transformation at once. So it intrinsically has the problem to a lesser degree than APLs do, and it has also taken steps to fix it.

            1. 2

              How easy is it to refactor APL or J code? The reason I ask is that I have the same problem with the ramdajs library in JavaScript, which my team uses as a foundational library. It has around 150 functions, I don’t remember what they all do and I certainly don’t remember what they’re all called, so I often write things in a combination of imperative JS and ramda then look for parts to refactor. I’m interested to hear whether that’s possible with APL, or whether you have to know APL before you can write APL.

          1. 1

            This kind of argument against the APL family always feels odd. You can say APL lacks a modern FFI. You can say APL lacks variable scopes. You can say APL’s workspace feels arcane. But the symbols/functions/operators?

            If your native tongue is English, how long would you expect to spend writing a sentence in Farsi using only a dictionary?

            1. 5

              I think this argument is limited to J: k/q and other APLs like Dyalog don’t have this problem and the author admits they don’t know those languages. At the end of the day the author is calling for more/better documentation, and I don’t think either of those things would make J worse.

              See, most people learn to write by reading lots of things.

              Programmers however, learn to program by programming lots of things.

              The idea is that after 5-10 years of programming a certain way, your brain changes enough so you start thinking that way.

              To that end, it makes sense to try to find ways to make it easier for newbies to learn how to program lots of things.

              Now.

              It’s difficult to look up words in a Chinese dictionary.

              Most Chinese “words” are made up of a few radicals.

              If you can identify them, you can look up the number of strokes in the radical on a table to get an index. You combine all of the indexes together to find what page to look up.

              If you can’t identify them, you can try looking it up by stroke count. There are lots of “words” with twelve strokes, so this can take a lot of time.

              It might not be possible to improve on either of these (there’s also a corners-method, but it’s frustrating to learn) which puts Chinese at a disadvantage in recruitment.

              Showing people things they can do and say in J that are so much shorter and faster and have fewer bugs than in other languages works only insofar as other programmers believe those things are important, and I don’t think most programmers believe they are important enough to be worth an extra 5-10 years of learning J or Chinese.

              1. 1

                Saw this in another comment, but it doesn’t really make sense to compare J to Chinese. Natural languages are optimized for single-pass parsing and have lots of redundancy for error correction. Consider how much worse your understanding becomes when you don’t know / misread one logogram vs. even one character in a J program.

                1. 1

                  It’s about the same as when I read Chinese. Some of the reasons are in my other comment.

                  I found this thread of people talking in music absolutely fascinating.

              2. 4

                I can’t afford to completely stop being productive at work, and I don’t want to spend personal time learning a new language. The kind of code I write today (in Scala) would look completely alien to the me who first picked up Scala 8 years ago, but, crucially, I’ve been able to reach that point incrementally, step by step, spending work time only and remaining productive all the time - even 8 years ago I was able to be as productive in Scala as I had been in Java, and close to as productive as I had been in Python. (And this is why I learnt Scala rather than Haskell - even if the kind of code I’m now writing would be easier in Haskell, I had no way to get there)

                Maybe it’s not possible to make a language that has the advantages of APL and the gradual on-ramp of Scala, but if so I can’t see any way most programmers are ever going to adopt such a language, however good the end state may be.

                1. 4

                  Shops that use less mainstream languages tend to have language courses as part of the onboarding. Advanced places using mainstream languages also tend to have this, the extreme being Google. I think if using some language is truly as valuable as the company claims it is, they are likely to support training during work hours.

                  1. 1

                    Hmm. The place I currently work actually has (or at least had at some point) a significant APL contingent, but I’ve never seen a course offered. Will keep my eyes open.

                  2. 1

                    I think the Jane St position on learning a new language is a nice counterpoint, with OCaml being the language. One point they made (IIRC) is that people who could pick it up in onboarding were likely better programmers than those only knowing, say, Java. Supporting your side are all the languages that are successful because they built on the syntax, basic capabilities, or runtimes of other ones. Scala is a great example of the latter for Java. Go might be another, since it’s very approachable for developers of imperative languages.

                    So, if it’s APL, I’d either make the onboarding easy by going all in, make it a framework/library for an existing language that’s very popular, or make it a DSL for a language supporting macros. I find the latter most interesting, with Julia being the leading candidate since it already targets the numerical sub-field with seamless integration of the C and Python code they already use. I’d default to doubting APL would have much of a productivity edge over an APL DSL in Julia, since developers will spend more time thinking about and operating on data than writing the code itself. Julia code would also be short, being dynamic.

                    That’s just my speculation on this. I’d only learn APL to learn the mindset and some useful operators for a DSL or library. We shouldn’t need to go all in with it given the current state of programming tooling.

                1. 8

                  This advice should be expanded. Do not under any circumstance use any kind of 3rd party VPN at all.

                  1. 4

                    VPNs are this decade’s antivirus.

                    1. 2

                      This advice is hyperbolic… there are tons of valid uses for a 3rd party VPN. For example, I use a 3rd party VPN to torrent over networks that punish me for doing so (LTE, university WiFi).

                      1. 1

                        OK… this is not very helpful advice. But, if you have something constructive to say on the subject, I’d like to hear it!

                        I don’t suppose you’re saying that you should just trust your ISP.

                        Perhaps you’re saying that you should set up and maintain your own VPN? Do you have any helpful resources to suggest for those of us who might want to do that? Because I can imagine a few ways to get that wrong too.

                        But perhaps more to the point, what do you suggest for less technical people who are concerned about their privacy, or those who don’t want to maintain that much infrastructure?

                        1. 3

                          I don’t suppose you’re saying that you should just trust your ISP.

                          If you’re using a VPN service, that’s exactly what you’re doing - just trusting the VPN operator.

                          But perhaps more to the point, what do you suggest for less technical people who are concerned about their privacy, or those who don’t want to maintain that much infrastructure?

                          “If you’re telling people not to buy these rocks, what do you suggest for people who are concerned about keeping tigers away and can’t afford fences and guns?”

                          If you genuinely need to access the internet without being tracked, you need to put the legwork in and use Tor; this is not something you can afford to trust someone else to do for you (though there are bundled installers etc. that can make it slightly easier).

                          1. 2

                            Sometimes I trust my VPN operator more than my ISP. Thus using the VPN is nicer

                            Example cases:

                            • being in China
                            • airport wifi
                            1. 1

                              Using Tor for many tasks is no harder than using a VPN anyway

                            2. 3

                              There was another blog post not too long ago about not using VPNs. This article does state all the reasons to use a VPN: protecting you from your ISP and protecting your location data.

                              However, a VPN isn’t Tor. The provider can still keep logs on the VPN side and turn them over to police, even in other countries. It has a limited set of uses and people need to understand what those uses are. Too many people use it without understanding what VPNs do and don’t do (similar to the confusion around Private Window browsing .. even though there’s a clear wall of text describing the limitations, most people don’t read it).

                            3. 1

                              I’d argue that pretty much anyone who reads this site has the wherewithal to set up their own VPN. Check out Streisand or Algo

                            1. 13

                              Fails to deliver on the promise of an unthinkable thought.

                              1. 4

                                The author seems to be thinking that the Smalltalk programme had some merits that the current PLT programme doesn’t, which I find unthinkable. So it delivered as far as I’m concerned.

                                1. 3

                                  Most tantalizing question was:

                                  ..when and why did we start calling programming languages “languages”?

                                  1. 3

                                    That seems neither unthinkable nor unanswerable. I mean, I don’t know the answer off hand, but there’s a finite number of papers one can read to find out.

                                    1. 1

                                      Yup. I wasn’t disagreeing with you.

                                    2. 1

                                      The same reason earlier formal languages like predicate logic are called languages.

                                      1. 1

                                        Uhh, citation required?

                                        1. 2

                                          Do you not think computer languages are formal languages? Do I need a citation if I say English is a natural language?

                                          1. 1

                                            Ah, that link/term is helpful. Thanks!

                                            I’m sorry I’m annoying you.

                                            Do I need a citation if I say English is a natural language?

                                            No, but the whole point under discussion is why our terminology connects formal languages with natural languages. When did the term “formal language” come to be? The history section in your Wikipedia link above mentions what the term can be applied to, but not when the term was coined. Was it coined by Chomsky at the dawn of the field of mathematical linguistics? That’s not before computers, in which case the causality isn’t quite as clear and obvious as you make it sound.

                                            I’ll stop responding now, assuming you don’t find this as interesting as I do.

                                            Edit: wait, clicking out from your profile I learn that you are in fact a linguist! In which case I take it back, I’m curious to hear what you know about the history of Mathematical Linguistics.

                                            1. 3

                                              Was it coined by Chomsky at the dawn of the field of mathematical linguistics?

                                              It’s at least older than that. The term “formal language theory” in the sense of regular languages, context-free grammars etc. does date to Chomsky. But the idea that one might want to invent a kind of “formal” language for expressing propositions that’s more precise than natural languages is older. One important figure making that argument was Gottlob Frege, who was also an early user of the term (I’m not sure if he actually coined it). He wrote an 1879 book entitled Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, which you could translate as something like, Concept-Script, a formal language modeled on that of arithmetic, for pure thought.

                                              1. 1

                                                Thanks a lot for that elaboration on the Frege link!

                                              2. 2

                                                In general they’re all languages because they have a syntax (certain combinations are ‘ungrammatical’ or produce interpreter/compiler errors) and a (combinatorial) semantics (the basic symbols have meaning and there are rules for deriving the meaning of [syntactic] combinations of symbols).

                                                Formal languages go back at least to Frege’s Begriffsschrift of 1879, which isn’t before Babbage described the Analytical Engine (1837) but certainly before digital computers. And there are precursors like Boole’s Logic and Leibniz also worked on something of the same sort, and there are yet earlier things like John Wilkins’ “philosophical language” and other notions of a similar kind.

                                                For modern linguistic work on semantics, the work of Richard Montague is perhaps the most important, and there are connections to computer science from very early on - Montague employs Church’s lambda calculus (from the 1930s) which also underlies Lisp.

                                      2. 1

                                        Nothing that extreme, no, but I’d say it delivers on questioning rarely-questioned assumptions.

                                        1. 1

                                          How would you know? Maybe you simply failed to think it.

                                        1. 14

                                          This is often undervalued, but shouldn’t be! Moore’s Law doesn’t apply to humans, and you can’t effectively or cost efficiently scale up by throwing more bodies at a project. Python is one of the best languages (and ecosystems!) that make the development experience fun, high quality, and very efficient.

                                          As a Python programmer, this is a perspective that has never entirely made sense to me. Well, I should say it hasn’t made sense to me for the last few years, at least. I feel like many people have this held-over dichotomy in their heads where Python is expressive and enjoyable, and thus one can write production code quickly, whereas other languages are not expressive and not enjoyable and thus code takes a long time to write. But while this might have been true in the past—while your performant options might once have all been some variation on fighting with the compiler, diagnosing obscure compilation errors, and waiting for interminable builds—none of those are actually hallmarks of development in a typed, performant language anymore (except for C++). Modern compilers are fast, and languages like Nim and D and Haskell are expressive and have powerful type inference. And generally speaking we are now in an era where a type system is not just a necessary evil for a compiler that’s too stupid to know how to interpret any variable without being explicitly told; type systems are universally recognized to be programmer aids, helping programmers write code that is correct as well as performant. Without wading into the types vs tests debate, at the very least there’s now a recognition that type systems, too, are for making the development experience high quality and very efficient.

                                          If I were being cynical I would say that sometimes arguments like this feel like it’s really mostly about the “fun” part. That “programmer happiness” part, which is often conflated with programmer efficiency and expressiveness, but isn’t actually the same. It can almost feel like a hostage job—“I better enjoy the language I’m writing in, otherwise I couldn’t possibly be productive in it!”

                                          1. 8

                                            I find typed/compiled languages more fun actually, even C++. Because it drives me absolutely fucking bonkers to run a program and get a really stupid type error, fix, re-run, and get another type error. The compiler just tells you all the type/syntax problems up front and you can fix all of them with minimal rage.

                                            1. 6

                                              Yeah, mypy and TypeScript have been a boon to productivity. Especially strict null checks (a quick sketch below).

                                              The advantage of the weaker languages is not having to play the “I have to make containers for all my thingies” game. Sometimes just a tuple is nice.

                                              Some of the existing typed languages don’t always follow the “if it’s conceptually simple, or if it’s easy for a computer to do, it should be simple in practice” rule. Especially when you’re crossing library boundaries and now spending a bunch of time marshalling/unmarshalling (sometimes necessary of course!) functionally equivalent stuff.

                                              Devil in the details of course
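
                                              On the strict-null-checks point, a minimal mypy sketch of the kind of bug that gets caught before the program ever runs:

                                              from typing import Optional

                                              def find_user(name: str) -> Optional[str]:
                                                  return name if name == "admin" else None

                                              user = find_user("guest")
                                              # print(user.upper())    # rejected by mypy: `user` may be None
                                              if user is not None:
                                                  print(user.upper())  # fine: narrowed to str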

                                            2. 6

                                              I think your confidence in compilers is perhaps misplaced. It’s not just a matter of speed–other factors, like memory usage and even ability to compile period are relevant.

                                              none of those are actually hallmarks of development in a typed, performant language anymore (except for C++).

                                              I’d argue that the only widely-used performant typed language is C++, possibly Fortran (though Rust is getting close).

                                              The reason for this is that the farther you get into the problem domain (and the more comfortable it is for you), the farther you move away from actual silicon running instructions. It’s not a false dichotomy.

                                              The best-performing code will be written in assembly, but it’ll be terrible to deal with as a human (because we aren’t computers). The most comfortable code will be written in a high-level language (ultimately taken to extreme of “hey, grad student, write me a program to do X”), which is exactly not what runs on silicon.

                                              1. 4

                                                I think your confidence in compilers is perhaps misplaced.

                                                Now include python on the same plot, and the axes will stretch so far that GHC will look indistinguishable from GCC.

                                                the farther you get into the problem domain (and the more comfortable it is for you), the farther you move away from actual silicon running instructions. It’s not a false dichotomy.

                                                It’s only a true dichotomy if the human is better at telling the silicon how to implement the problem than the compiler is, which gets less true every day. It’s already the case that GCC will often beat hand-coded assembly when trying to solve the same problem. And my experience is that on real business-sized problems with ordinary levels of programmer skill and limited time available to produce an optimised solution, Haskell will often comfortably outperform C++.

                                                The best-performing code will be written in assembly, but it’ll be terrible to deal with as a human (because we aren’t computers).

                                                These days assembly is a long way away from reflecting what the actual silicon does. To first order the only thing that matters for performance these days is how well you’re using the cache hierarchy, and that’s not visible in assembly code; minor tweaks to your assembly can lead to radically different performance characteristics.

                                            1. 4

                                              I find that grateful customers (internal or external) are what makes my work meaningful. Right now I’m working in a large financial institution, on what’s broadly speaking risk analysis; the value of this stuff is diffuse and non-obvious, but I believe the people I’m helping are ultimately doing good for humanity, and certainly they appreciate me improving the tools they use to do their work. Previously I’ve had satisfying jobs in adtech, insurance, and telecoms. The job where I had the most direct belief in the company mission was last.fm, but I found that the least satisfying place to work in practice.

                                              1. 18

                                                Some feedback:

                                                I have seen many oil shell posts, but still don’t know what the heck the actual OIL language looks like.

                                                1. 4

                                                  OK thanks, maybe I should link these at the very top of the “A Glimpse of Oil” section:

                                                  http://www.oilshell.org/blog/tags.html?tag=osh-to-oil#osh-to-oil

                                                  They are linked somewhere in the middle, which is probably easy to miss.

                                                  It’s sort of on purpose, since Oil isn’t implemented yet, as I mention in the intro. But I think those posts give a decent idea of what it looks like (let me know if you disagree).

                                                  1. 7

                                                    I’ve seen your posts and hat around and never really understood what Oil was about, but this link is really wonderful. The comparison to shell, the simplifications, the 4 different languages of shell vs the two of Oil, it all really clicked. Really cool project.

                                                    1. 3

                                                      I agree with the others. Until I see what your vision for the language is, I’m not motivated to get involved.

                                                      The only example you give contains “if test -z $[which mke2fs]”, which can’t be what you’re aiming at.

                                                      IMHO, if you really want Oil to be easy to use, you should take as much syntax from Python or JavaScript as you can, and use similar semantics too.

                                                      1. 11

                                                        I’m willing to be convinced that a new syntax would be better for shell programming.

                                                        I’m not very confident that moving towards an existing non-shell scripting language will get us there.

                                                        The problem I have with writing shell programs in some non-shell language is that I expect to keep using the same syntax on the command line as I do in scripts I save to disk, and non-shell languages don’t have the things that make that pleasant. For example, a non-shell language has a fixed list of “words” it knows about, and using anything not on that list is a syntax error. That’s great in Python, where such a word is almost certainly a spelling error, but in a shell, most words are program names and I don’t want my shell constantly groveling through every directory in my $PATH so it knows all my program names before I try to use them.

                                                        I’ve also never seen a non-shell language of any type with piping and command substitution as elegant as bash and zsh, but I’m willing to be convinced. I’m afraid, though, anyone in the “Real Language” mindset would make constructions such as diff <(./prog1 -a -b) <(./prog1 -a -c) substantially more verbose, losing one of the main reasons we have powerful shells to begin with.

                                                        1. 3

                                                          Yes it has to be a hybrid. I talk a little about “command vs expression” mode in the post. I guess you’ll have to wait and see, but I’m aware of this and it’s very much a central design issue.

                                                          Of course “bare words” behave in Oil just as they do in bash, e.g.

                                                          echo hi
                                                          ls /
                                                          

                                                          I will not make you type

                                                          run(["echo", "hi"])
                                                          

                                                          :-)

                                                          One of the reasons I reimplemented bash from scratch is to be aware of all the syntactic issues. Process substitution should continue to work. In fact I’ve been contemplating this “one line” rule / sublanguage – that is, essentially anything that is one line in shell will continue to work.

                                                          Also, OSH and Oil will likely be composed, and OSH already implements the syntax you are familiar with. This is future work so I don’t want to promise anything specific, but I think it’s possible to get the best of both worlds – familiar syntax for interactive use and clean syntax for maintainable programs.

                                                          1. 1

                                                            For example, a non-shell language has a fixed list of “words” it knows about, and using anything not on that list is a syntax error. That’s great in Python, where such a word is almost certainly a spelling error, but in a shell, most words are program names and I don’t want my shell constantly groveling through every directory in my $PATH so it knows all my program names before I try to use them.

                                                            tclsh is an interesting example of not having this problem.

                                                            I’m afraid, though, anyone in the “Real Language” mindset would make constructions such as diff <(./prog1 -a -b) <(./prog1 -a -c) substantially more verbose, losing one of the main reasons we have powerful shells to begin with.

                                                            You get constructs that look like this in pipey libraries for functional languages (the likes of fs2 or conduit), though they’re controversial.

                                                            1. 1

                                                              Well put. There’s also loads of muscle memory built up that is hard to leave behind. That point keeps me off of fish; I like almost everything else about it, but I don’t see why it can’t have a separate bash-like syntax.

                                                            2. 2

                                                              OK that’s fair. I’m on the fence about outside contributions – some people have contributed, but I think most projects have more users and more “bones” before getting major contributions. I’m really looking for people to test OSH on real shell scripts, not necessarily adopt it or contribute. (although if you can figure out the code, I applaud you and encourage your contributions :) )

                                                              As I mention in the post, the OSH language is implemented (it runs real shell scripts), but Oil isn’t.

                                                              There will be a different way to test if a string is empty, but for the auto-conversions, if you have [ -z foo ], it will become test -z foo. The auto-conversion is going to make your script RUN, not make it idiomatic.

                                                              As far as appearance, you can definitely think of Oil as a hybrid between shell and Python/JavaScript.

                                                              I can probably write up a cheatsheet for those curious. I haven’t really done so because it feels like promising something that’s not there. But since I’ve written so many blog posts, it might be worth showing something in the style of:

                                                              https://learnxinyminutes.com/docs/bash/

                                                          2. 0

                                                            Yes and I don’t think I’ll care about it until I do. It could look like APL for all we know.

                                                          1. 24

                                                            “There are a lot of CAs and therefore there is no security in the TLS CA model” is such a worn out trope.

                                                            The Mozilla and Google CA teams work tirelessly to improve standards for CAs and expand technical enforcement. We remove CAs determined to be negligent and raise the bar for the rest. There seems to be an underlying implication that there are trusted CAs who will happily issue you a google.com certificate: NO. Any CA discovered to be doing something like this gets removed with incredible haste.

                                                            If they’re really concerned about the CA ecosystem, requiring Signed Certificate Timestamps (part of the Certificate Transparency ecosystem) for TLS connections provides evidence that the certificate is publicly auditable, making it possible to detect attacks.

                                                            Finally, TLS provides good defense in depth against things like CVE-2016-1252.

                                                            1. 13

                                                              Any CA discovered to be doing something like this gets removed with incredible haste.

                                                              WoSign got dropped by Mozilla and Google last year after it came to light that they were issuing fraudulent certificates, but afaict there was a gap of unknown duration between when they started allowing fraudulent certs to be issued and when it was discovered that they were doing so. And it still took over six months before the certificate was phased out; I wouldn’t call that “incredible haste”.

                                                              1. 2

                                                                I’m not sure where the process is, but if certificate transparency becomes more standard, I think that would help with this problem.

                                                              2. 5

                                                                TLS provides good defense in depth against things like CVE-2016-1252.

                                                                 Defense in depth can do more harm than good if it blurs where the actual security boundaries are. It might be better to distribute packages in a way that makes it very clear they’re untrusted than to additionally verify the packages if that additional verification doesn’t actually form a hard security boundary (e.g. rsync mirrors also exist, and while rsync hosts might use some kind of certification, it’s unlikely to follow the same standards as HTTPS. So a developer who assumed that packages fed into apt had already been validated by the TLS CA ecosystem would be dangerously misled)

                                                                1. 5

                                                                  This is partly why browsers are trying to move from https being labeled “secure” to http being labeled “insecure” and displaying no specific indicators for https.

                                                                  1. 1

                                                                    e.g. rsync mirrors also exist and while rsync hosts might use some kind of certification, it’s unlikely to follow the same standards as HTTPS

                                                                    If you have this additional complexity in the supply chain then you are going to need additional measures. At the same time, does this functionality provide enough value to the whole ecosystem to exist by default?

                                                                    1. 5

                                                                      If you have this additional complexity in the supply chain then you are going to need additional measures.

                                                                      Only if you need the measures at all. Does GPG signing provide an adequate guarantee of package integrity on its own? IMO it does, and our efforts would be better spent on improving the existing security boundary (e.g. by auditing all the apt code that happens before signature verification) than trying to introduce “defence in depth”.

                                                                      At the same time, does this functionality provide enough value to the whole ecosystem to exist by default?

                                                                      Some kind of alternative to HTTPS for obtaining packages is vital, given how easy it is to break your TLS libraries on a linux system through relatively minor sysadmin mistakes.

                                                                1. 1

                                                                  meh, this is really a cat and mouse game. just test it like:

                                                                  if (navigator.webdriver || navigator.hasOwnProperty('webdriver')) {
                                                                    console.log('chrome headless here');
                                                                  }
                                                                  

                                                                  And there goes the article until the author can find a way to bypass this now…

                                                                  1. 6

                                                                     The point of the article is sort of that it’s a cat and mouse game. The person doing the web browsing is inherently at an advantage here, because they can figure out what the tests are and get around them. Making the tests more complicated just makes things worse for your own users; it doesn’t really accomplish much else.

                                                                    const oldHasOwnProperty = navigator.hasOwnProperty;
                                                                    navigator.hasOwnProperty = (property) => (
                                                                      // Call the saved method with `navigator` as receiver; a bare
                                                                      // call would lose `this` and break for every other property.
                                                                      property === 'webdriver' ? false : oldHasOwnProperty.call(navigator, property)
                                                                    );
                                                                    Object.defineProperty(navigator, 'webdriver', {
                                                                      get: () => false,
                                                                    });
                                                                    
                                                                    1. 1

                                                                      Yet there are other ways that surely make it possible for a given time window, like testing for a specific WebGL rendering that chrome headless cannot perform. Or target a specific set of bugs related only to chrome headless.

                                                                      https://bugs.chromium.org/p/chromium/issues/detail?id=617551

                                                                      1. 1

                                                                        Well, eventually you just force people to run Chrome with remote debugging or Firefox with Marionette in a separate X session, mask the couple of vars that report remote debugging, and then you have to actively annoy your users to go any further.

                                                                        I scrape using Firefox (not even headless) with Marionette; I also browse with Firefox with Marionette because Marionette makes it easy to create hotkeys for strange commands.

                                                                        1. 1

                                                                          Even if there were no way to bypass that, don’t you think that you’ve sort of already lost in some sense once you’re wasting your users’ system resources to do rendering checks in the background just so that you can restrict what software people can choose to use when accessing your site?

                                                                          1. 3

                                                                            If a headless browser is required to scrape the data (rather than just requesting webpages and parsing the HTML), then the website is already perverse enough. No one would be surprised if it also ran WebGL-based proof of work before rendering its most expensive thief-proof news articles from a blob of Malbolge bytecode, with logic based on GPU cache timing.

                                                                            1. 1

                                                                              You’re paying a price, certainly. But depending on your circumstances, the benefits might be worth the cost.

                                                                      1. 0

                                                                        I have this horrible horrible feeling that Rust is becoming the new Perl. This all reminds me of when Perl added “object orientation” and things became more confusing and hard to understand for passers by like myself.

                                                                        1. 5

                                                                          Every serious new language needs to be able to solve the c10k problem; we knew Rust would need async I/O sooner or later.

                                                                            What is ugly is the proliferation of macros when a proper general-purpose solution is possible. If you look at the final example from the link, async! is fulfilling exactly the same role as the notorious try!; if the language adopted HKT, it could build a single reusable standard form of “do notation” into the language and reuse it for result, async, option, and many other things: https://philipnilsson.github.io/Badness10k/escaping-hell-with-monads/
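
                                                                            To illustrate the shared shape, here’s a hedged Python sketch (with (value, error) pairs standing in for Result; all names made up): try!-style chaining is just one instance of a generic and_then that do notation would let you write once:

                                                                            def and_then(result, f):
                                                                                # The shared shape behind try!, Option chaining, and await:
                                                                                # stop at the first failure, otherwise feed the value onward.
                                                                                value, error = result
                                                                                return (None, error) if error is not None else f(value)

                                                                            def parse(s):
                                                                                return (int(s), None) if s.isdigit() else (None, "not a number: " + s)

                                                                            def recip(n):
                                                                                return (1.0 / n, None) if n != 0 else (None, "division by zero")

                                                                            assert and_then(parse("4"), recip) == (0.25, None)
                                                                            assert and_then(parse("x"), recip) == (None, "not a number: x")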

                                                                          1. 6

                                                                            It is extremely unclear that a strongly-typed do notation is possible in Rust. It’s also not clear if we’ll ever get HKT directly; GAT gives us equivalent power, but fits in with the rest of stuff more cleanly.

                                                                            1. 3

                                                                              What is GAT?

                                                                              1. 2

                                                                                generic associated types

                                                                            2. 1

                                                                              I think async! will eventually be made into a language feature (a couple of community members have proposals for this), it’s just that we’re experimenting on it as a proc macro, because we can. It’s way more annoying to experiment with features baked into the language.

                                                                            3. 1

                                                                              The only language feature added here is generators (and possibly async/await sugar); everything else will be a library. Both things are quite common amongst languages, and shouldn’t be too confusing.

                                                                              Everything else listed here is a library that you only deal with when you are doing async stuff. And the complexities listed are largely internal complexities; tokio should be pretty pleasant to work with.

                                                                            1. 3

                                                                                The real problem usually starts when languages that want to establish null safety have to interop with an ecosystem where the distinction between nullable and non-nullable doesn’t exist.

                                                                              Newer languages often have the approach of separating values that can be null from values that cannot be null, but this fails to work when those languages have to interoperate with code where it is unknown whether something can be null or not.

                                                                                This is how languages would like to deal with null:

                                                                                     Nullability
                                                                                    /           \
                                                                                   /             \
                                                                              Not Nullable    Nullable
                                                                              

                                                                              But given an ecosystem that doesn’t make this distinction, you end up with something like:

                                                                                                Nullability
                                                                                               /          \
                                                                                              /            \
                                                                                  Known Nullability     Unknown Nullability
                                                                                    /           \
                                                                                   /             \
                                                                              Not Nullable    Nullable
                                                                              

                                                                                It looks enticing for language designers to try to merge the values with unknown nullability into one of the existing categories, but both approaches – treating values of unknown nullability as nullable, or as non-nullable – have substantial problems.
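
                                                                                Concretely, a sketch of the problem in mypy terms (where Any plays the role of unknown nullability):

                                                                                from typing import Any, Optional

                                                                                def known_not_null(s: str) -> int:
                                                                                    return len(s)                      # checked: s can never be None

                                                                                def known_nullable(s: Optional[str]) -> int:
                                                                                    return 0 if s is None else len(s)  # the None check is required

                                                                                def legacy_lookup(key: str) -> Any:    # untyped interop boundary
                                                                                    ...

                                                                                x: str = legacy_lookup("k")            # merged into non-nullable: may blow up later
                                                                                y: Optional[str] = legacy_lookup("k")  # merged into nullable: forces checks
                                                                                                                       # even if None can never actually occur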

                                                                              1. 1

                                                                                Isn’t this the same problem as existentials in general, which any Java-interop language already needs to solve? Unknown nullability is just N forSome { type N[X] <: Nullability[X] } or however you want to write it.

                                                                              1. 2

                                                                                Personally, I really like the approach of null punning. You just bubble null values up the call chain and let the user handle them. This avoids having to pepper checks all over the code which is error prone in languages where the checks are optional, and noisy in those that enforce them. In vast majority of cases I find that I’ll have a series of computations I want to do on a piece of data, and I only care whether it’s null at the start or the end of that chain. This is a good longer write up on the approach.
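
                                                                                  A minimal sketch of the idea in Python terms (Python raises on attribute access of None, so the bubbling has to be spelled out as a helper):

                                                                                  def pun(value, *steps):
                                                                                      # Bubble None up the chain: each step runs only if nothing
                                                                                      # upstream was None; the caller checks once at the end.
                                                                                      for step in steps:
                                                                                          if value is None:
                                                                                              return None
                                                                                          value = step(value)
                                                                                      return value

                                                                                  user = {"address": {"city": "Oslo"}}
                                                                                  assert pun(user, lambda u: u.get("address"), lambda a: a.get("city")) == "Oslo"
                                                                                  assert pun({},   lambda u: u.get("address"), lambda a: a.get("city")) is None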

                                                                                1. 4

                                                                                  and noisy in those that enforce them

                                                                                  It’s not if your language has decent support for them - you can easily do the ‘bubbling up’. And it’s not very often you need things to be nillable, so any cost is mitigated.

                                                                                  1. 3

                                                                                    This is the Objective-C approach, as well, and it’s very nice, once you get used to it.

                                                                                    1. 2

                                                                                      It’s horrible in cases where null isn’t actually a valid value; you get the “cannot read property of undefined” problem where you find out about a failure three modules and two thousand lines away from the actual code problem (often in someone else’s code) and don’t have enough information to find out more. Much better to make invalid states unrepresentable and fail fast rather than going into an invalid state.

                                                                                    1. 2

                                                                                      There was a period of time when I was all about OCaml. I appreciate the purity of Caml and Standard ML more though, for whatever reason, just from an aesthetic standpoint (the object-orientedness of OCaml just seems shoehorned in to me).

                                                                                      The sad truth, though, is that in my limited time I have to focus on the stuff I need for work. The only languages I’m truly fluent in anymore are C, Python, SQL, Bourne shell, and…I guess that’s it. I can get around in Lua if I need to, but I haven’t written more than a dozen lines of code in another language in at least five years.

                                                                                      (That’s not to say I don’t love C, Python, SQL, and Bourne shell, because I do.)

                                                                                      I’ve been messing around with Prolog (the first language I ever had a crush on) again just for fun, but I’m worried I’m going to have to put it down because of the aforementioned time issue. Maybe I can start writing some projects at work in ML. :)

                                                                                      1. 7

                                                                                        SML is probably my favorite language. It’s compact enough that you can keep the whole language (and Basis libraries) in your head fairly easily (compared to, say, Haskell, which is a sprawling language). I find strict execution much easier to reason about than lazy, but the functional-by-default nature remains very appealing.

                                                                                        Basically, it’s in a good sweet spot of languages for me.

                                                                                        But, it’s also a dead language. There is a community, but it’s either largely disengaged (busy writing other languages for work), or students who have high engagement but short lifespans. There are a few libraries out there, and some are good but rarely/never updated, and some are not good and rarely/never updated.

                                                                                        I still think it’s a great language to learn, because (as lmmm says) being fluent in SML will make you a better programmer elsewhere. Just know that there aren’t many active resources out there to help you actually write projects and whatnot.

                                                                                        1. 2

                                                                                          Everything that you said, plus one thing: Standard ML, unlike Haskell or OCaml, realistically allows you to prove things about programs — actual programs, not informally described algorithms that programs allegedly implement. Moreover, this doesn’t need any fancy tools like automatic theorem provers or proof assistants — all you need is simple proof techniques that you learn in an undergraduate course in discrete mathematics and/or data structures and algorithms.

                                                                                          1. 3

                                                                                            Absolutely. I think the niche for languages with a formal specification is fairly small, but it is irreplaceable in that niche.

                                                                                            1. 1

                                                                                              Just out of curiosity, do you have any reading recommendations on formal proofs for ML programs?

                                                                                              1. 3

                                                                                                Let me be upfront: When I said “prove” in my previous comment, I didn’t mean “fully formally prove”. The sheer amount of tedious but unenlightening detail contained in a fully formal proof makes this approach prohibitively expensive without mechanical aid. Formal logic does not (and probably cannot) make a distinction between “key ideas” and “routine detail”, which is essential for writing proofs that are actually helpful to human beings to understand.

                                                                                                With that being said, I found Bob Harper’s notes very helpful to get started, especially Section IV, “Programming Techniques”. It is also important to read The Definition of Standard ML at some point to get an idea of the scope of the language’s design, because that tells you what you can or can’t prove about SML programs. For example, the Definition doesn’t mention concurrency except in an appendix with historical commentary. Consequently, to prove things about SML programs that use concurrency, you need a formalization of the specifics of the SML implementation you happen to be using (which, to the best of my knowledge, no existing SML implementation provides).

                                                                                          2. 3

                                                                                            OCaml is yet another mainstream-aiming language full of dirty compromises and even outright design mistakes:

• The types of strict lists, trees, etc. are not really inductive, due to OCaml’s permissiveness w.r.t. what can go on the right-hand side of a let rec definition (see the sketch after this list).
                                                                                            • It has an annoying Common Lisp-like distinction between “shallow” and “deep” equality.
                                                                                            • Moreover, either kind of equality can be used to violate type abstraction.
                                                                                            • Mutation is hardwired into several different language constructs (records, objects), rather than provided as a single abstract data type as it well should be.
                                                                                            • Applicative functors with impure bodies are leaky abstractions.
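A minimal sketch of the first two points (the value names are just for illustration):

(* let rec on a plain value builds a cyclic list, so 'a list is not
   truly inductive; structural recursion over ones never terminates. *)
let rec ones = 1 :: ones

(* Two notions of equality: (=) is structural ("deep"),
   (==) is physical ("shallow"). *)
let a = [1; 2]
let b = [1; 2]
let () = assert (a = b)          (* same structure *)
let () = assert (not (a == b))   (* different heap blocks *)

Computing the length of ones, for instance, loops forever, which is exactly the kind of behaviour a proof by structural induction has to be able to exclude.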
                                                                                            1. 3

Many of the complaints about OCaml here are justified to a degree; I use it in my day job, so I’ve run into a number of these issues myself. It is a complex language, especially the module language.

                                                                                              the object-orientedness of OCaml just seems shoehorned in to me

I think that’s a commonly repeated myth, but OCaml OOP is not really like Java’s. Objects are structural which gives it a quite interesting spin compared to traditional nominal systems; classes are more like templates for objects, and the object system is in my opinion no more shoehorned in than polymorphic variants (unless you consider those shoehorned as well).
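A small sketch of that structural flavour (the shape objects here are made-up examples): anything exposing the right method set is accepted, with no shared class or interface declaration in sight.

(* total_area accepts any objects with a method [area : float]. *)
let total_area shapes =
  List.fold_left (fun acc s -> acc +. s#area) 0.0 shapes

let square side = object method area = side *. side end
let circle r =
  let pi = 4.0 *. atan 1.0 in
  object method area = pi *. r *. r end

let () = Printf.printf "%f\n" (total_area [square 2.0; circle 1.0])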

                                                                                              1. 4

                                                                                                …OCaml…(I use it in my day job)

                                                                                                So how’s working at Jane Street? :)

                                                                                                Objects are structural which gives it a quite interesting spin compared to traditional nominal systems…

                                                                                                Oh no, I get that. It’s a matter of having object-oriented constructs at all. It’s like C++ which is procedural and object-oriented, and generic, and functional, and and and. I like my languages single-paradigm, dang it! (I know it’s a silly objection, but I’m sometimes too much of a purist.)

                                                                                              2. 1

                                                                                                I work full-time in Scala, and I credit Paulson with teaching many of the foundations that make me effective in that language. Indeed even when working in Python, my code was greatly improved by my ML experience.

                                                                                                1. 1

                                                                                                  How is Scala? I feel like there would be a significant impedance mismatch between the Java standard libraries, with their heavy object-orientation, and Scala with its (from what I understand) functional style.

I think it would also bug me that the vast majority of the documentation for my language’s libraries would be written for another language (that is, I need to know how to use something in Scala, but the documentation is all Java).

                                                                                                  1. 2

                                                                                                    How is Scala?

                                                                                                    It’s really nice. More expressive than Python, safer than anything else one could get a job writing.

                                                                                                    I feel like there would be a significant impedance mismatch between the Java standard libraries, with their heavy object-orientation, and Scala with its (from what I understand) functional style.

                                                                                                    There’s a mismatch but there are libraries at every point along the path, so it gives you a way to get gradually from A to B while remaining productive.

I think it would also bug me that the vast majority of the documentation for my language’s libraries would be written for another language (that is, I need to know how to use something in Scala, but the documentation is all Java).

                                                                                                    Nowadays there are pure-Scala libraries for most things, you only occasionally have to fall back to the Java “FFI”. It made for a clever way to bootstrap the language, but is mostly unnecessary now.

                                                                                                    1. 1

                                                                                                      Very informative, thank you.

                                                                                              1. 13

                                                                                                True and False are names in Python 2; you can change the value of True to False if you’d like (or to any other value for that matter).

                                                                                                Not as much fun as this though:

>>> import ctypes
>>>
>>> value = 2
>>> # A CPython 2 int is laid out as [refcount][type pointer][ob_ival],
>>> # so the payload sits one size_t plus one pointer into the object.
>>> ob_ival_offset = ctypes.sizeof(ctypes.c_size_t) + ctypes.sizeof(ctypes.c_voidp)
>>> # Write through that offset into the interned small-int object for 2...
>>> ob_ival = ctypes.c_int.from_address(id(value)+ob_ival_offset)
>>> ob_ival.value = 3
>>> # ...and anything that returns the cached 2 now says 3.
>>> print 1+1
3
                                                                                                
                                                                                                1. 4

                                                                                                  You can do the same thing in Java too:

// Grab the private "value" field inside java.lang.Integer
// (checked reflection exceptions omitted for brevity).
java.lang.reflect.Field f = Class.forName("java.lang.Integer").getDeclaredField("value");
f.setAccessible(true);
// Autoboxing 2 yields the shared Integer-cache entry; overwrite its payload.
f.setInt(2, 3);
// reduce() autoboxes each sum, so 1 + 1 becomes the mutated "2" and prints 3.
System.out.println(java.util.stream.Stream.of(1, 1).reduce(0, Integer::sum));
                                                                                                  
                                                                                                1. 10

Any post that calls Electron ultimately negative but doesn’t offer a sane replacement (where “sane” precludes having to use C/C++) can be easily ignored.

                                                                                                  1. 10

There’s nothing wrong with calling out a problem even if you lack a solution. The problem still exists, and bringing it to people’s attention may cause other people to find a solution.

                                                                                                    1. 8

                                                                                                      There is something wrong with the same type of article being submitted every few weeks with zero new information.

                                                                                                      1. 1

                                                                                                        Complaining about Electron is just whinging and nothing more. It would be much more interesting to talk about how Electron could be improved since it’s clearly here to stay.

                                                                                                        1. 4

                                                                                                          it’s clearly here to stay

I don’t think that’s been anywhere near established. There is a long history of failed technologies purporting to solve the cross-platform GUI problem, from Tcl/Tk to Java applets to Flash, many of which in their heydays had achieved much more traction than Electron has, and none of which turned out in the end to be here to stay.

                                                                                                          1. 2

I seriously doubt much of anything, good or bad, is here to stay in a permanent sense.

                                                                                                            1. 2

The thing is that Electron isn’t reinventing the wheel here; it’s built on top of web tech that’s already the most widely used GUI technology today. That’s what makes it so attractive in the first place. Unless you think that the HTML/JS stack is going away, there’s no reason to think that Electron should either.

                                                                                                              It’s also worth noting that the resource consumption in Electron apps isn’t always representative of any inherent problems in Electron itself. Some apps are just not written with efficiency in mind.

                                                                                                        2. 5

                                                                                                          Did writing C++ become insane in the past few years? All those GUI programs written before HTMLNative5.js still seem to work pretty well, and fast, too.

                                                                                                          In answer to your question, Python and most of the other big scripting languages have bindings for gtk/qt/etc, Java has its own Swing and others, and it’s not uncommon for less mainstream languages (ex. Smalltalk, Racket, Factor) to have their own UI tools.

                                                                                                          1. 4

                                                                                                            Did writing C++ become insane in the past few years? All those GUI programs written before HTMLNative5.js still seem to work pretty well, and fast, too.

                                                                                                            It’s always been insane, you can tell by the fact that those programs “crashing” is regarded as normal.

                                                                                                            In answer to your question, Python and most of the other big scripting languages have bindings for gtk/qt/etc, Java has its own Swing and others, and it’s not uncommon for less mainstream languages (ex. Smalltalk, Racket, Factor) to have their own UI tools.

Shipping a cross-platform native app written in Python with PyQt or similar is a royal pain. Possibly no real technical work would be required to make it as easy as Electron, just someone putting in the legwork to connect up all the pieces and make it a one-liner that you put in your build definition. Nevertheless, that legwork hasn’t been done. I would lay money that the situation with Smalltalk/Racket/Factor is the same.

                                                                                                            Java Swing has just always looked awful and performed terribly. In principle it ought to be possible to write good native-like apps in Java, but I’ve never seen it happen. Every GUI app I’ve seen in Java came with a splash screen to cover its loading time, even when it was doing something very simple (e.g. Azureus/Vuze).

                                                                                                            1. 1

                                                                                                              Writing C++ has been insane for decades, but not for the reasons you mention. Template metaprogramming is a weird lispy thing that warps your mind in a bad way, and you can never be sane again once you’ve done it. I write C++ professionally in fintech and wouldn’t use anything else for achieving low latency; and I can’t remember the last time I had a crash in production. A portable GUI in C++ is so much work though that it’s not worth the time spent.

                                                                                                            2. 1

C++ the language becomes better and better every few years, but the developer tooling around it is still painful.

                                                                                                              Maybe that’s just my personal bias against cmake / automake.

                                                                                                          1. 4

Documents can be rendered onto the web but they’re not first-class citizens there; the fact that it takes the web 2 MB of resources to show this simple article is proof of that. At this point the web runs a bunch of code in one language to generate expressions in another language, to describe how to lay out blocks in a third language. And while being declarative and semantic is getting its moment at the top level, at every other level this is purely an operational output format. <p name="85c4" id="85c4" class="graf graf--p graf-after--li">, to take a random example from this page, is not just generated nonsense; that class is clearly generated from something that was itself generated nonsense.

What we’re reading happens to be a document. At the Medium backend level, it actually has an elegant, declarative representation as a document. But at every stage after that, all of that is lost; there’s no way for the reader or the browser to know what’s a heading, a paragraph, a quote, an abbreviation, an excerpt. Custom local stylesheets will not work with the Medium page. There is nothing you can do with their bundle of minified javascript except run it. Someone who actually wanted to do something document-oriented with it, like display it with different formatting or read it in a screenreader, would be best served by grabbing it from the Medium API, not what was sent to the web browser. (And Medium is actually one of the better sites in this regard; they’re at least e.g. using <em> for emphasis rather than a cascade of styled <span>s.)

                                                                                                            EPUB comes from a nobler time, a time when we imagined that semantic processing was possible. A time when we thought that the idea of the user agent knowing what was a title and what was a chapter heading, and letting the user set their own preferences for how each of them was displayed or treated, was worthwhile. Of course preparing an EPUB is harder than preparing a web page, because rather than just throwing out a pile of nonsense that spits out the right pixels in the right places, you’re expected to actually declaratively describe what your book is, which parts are which, and give semantic tools a chance to operate on your document as a document.

                                                                                                            The web doesn’t have that, not any more. It’s understandable why - 99% of users only care about the pixels, 99% of creators only care about the pixels, and the web’s fast and loose approach gets you the pixels as quickly as possible. In the long term web content is unmaintainable, but 99% of people don’t care about the long term. It’s sad though, and a proper document format on the web - i.e. a cross-site standard that gave tools a way to understand that a header on Medium is in some sense the same thing as a header in the New York Times, and do particular things to headers and not to non-headers - would be a nice thing to have. But the web being what it is, no doubt what we’ll get is a bundle of minified javascript that turns a structured epub into pixels in the right place on the screen, and we’ll call that good.

                                                                                                            1. 1

You know EPUB is just HTML, right? Most of the user agents are a bit more sane (no JS or complex CSS), but the HTML is the same.

                                                                                                              1. 2

                                                                                                                XHTML, though the philosophical difference is more important than the technical one. It’s possible to put semantic HTML on today’s web. But no-one will notice, since most HTML out there is non-semantic, so it’s not practical for anyone to make use of the structure of your documents.

                                                                                                            1. 2

                                                                                                              If you’re using backports on top of stable then you’re effectively using a less-popular, less-well-tested variant of testing.

                                                                                                              In theory regular stable releases make sense for a distribution that extensively patches and integrates the software it distributes. But given that Debian’s policy and practices predictably lead to major security vulnerabilities like their SSH key vulnerability, I figure such patching and integrating is worse than useless, and prefer distributions that ship “vanilla” upstream software as far as possible. Such distributions have much less need for a slow stable release cadence like Debian’s, because there’s far less modification and integration to be doing.

                                                                                                              1. 6

                                                                                                                a less-popular, less-well-tested variant of testing.

                                                                                                                Not at all. Going to testing means moving everything to testing. Moving Linux, moving gcc, moving libc. Stable + backports means almost everything is on stable except the things you explicitly move to backports. My current package distribution is:

                                                                                                                stretch: 5323
                                                                                                                stretch-backports: 7
                                                                                                                

The 7 packages I have from backports are: ldc, liboctave-dev, liboctave4, libphobos2-ldc-dev, libphobos2-ldc72, octave, and octave-common. Just Octave and the LDC compiler for D. You could hardly call them important system packages.
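For anyone unfamiliar with the mechanics, a minimal sketch of that opt-in (the release and package names are just this example’s):

# /etc/apt/sources.list.d/backports.list
deb http://deb.debian.org/debian stretch-backports main

# Backports are pinned at a low priority, so nothing is pulled in
# implicitly; each package has to be requested explicitly:
apt-get -t stretch-backports install octave ldc

Everything else keeps tracking stable.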

                                                                                                                1. 1

                                                                                                                  It’s worth remembering that the purpose of the computer is to run user programs, not to run the OS. I’d suggest that the programs a user enables backports for are likely to be those programs the user cares most about - precisely the most important packages.

                                                                                                                  1. 5

                                                                                                                    I am running stable because I don’t want to have distracting glitches on the side of the things I actually care about. I have the energy to chase after D or Octave bugs (after all, it’s kind of what I do), so I do want newer things of those. I don’t want to be chasing after Gnome or graphics driver bugs. Those system things get frozen so I can focus on the things I have the energy for.

                                                                                                                    1. 1

                                                                                                                      As a maintainer you’re in a rather unusual position; you’re, in a sense, running Octave for the sake of running Octave. Whereas most people with Octave installed are probably using Octave to do something, in which case Octave bugs would be a serious issue for them, probably more so than bugs in Gnome or graphics drivers.

                                                                                                                2. 4

                                                                                                                  But given that Debian’s policy and practices predictably lead to major security vulnerabilities like their SSH key vulnerability

                                                                                                                  Could you elaborate on that? How do those policies and practices do so predictably? And what would preferable alternatives look like in your opinion?

                                                                                                                  1. 4

Making changes to security-critical code without having them audited specifically from a security perspective will predictably result in security vulnerabilities. Preferred alternatives would be either to have a dedicated, qualified security team review all Debian changes to security-critical code, or to exempt security-critical code from Debian’s policy of aggressively patching upstream code to comply with Debian policy. Tools like OpenSSH do, by and large, receive adequate security review, but those researchers and security professionals work with the “vanilla” source from upstream; no one qualified is reviewing the Debian-patched version of OpenSSH, and that’s still true even after one of the biggest security vulnerabilities in software history.

                                                                                                                1. 2

                                                                                                                  anyone with more than 5 years of experience is going to find it to be a dissatisfying way to work. There is no place for an actual senior engineer on a Scrum team.

                                                                                                                  Not true, speaking from direct personal experience. Maybe some people don’t like working that way, but if that way is effective, then it’s on them to get over it. “Senior engineers” should be about filling the needs of the business, not about having the most polished plans.

                                                                                                                  Second, if you carry this way of working on for months or years, you get a lot of technical debt and you get low morale.

                                                                                                                  Not my experience. I’ve only ever seen “long-term” approaches introduce more technical debt. Continuous refactoring, the Agile approach, is the most effective way to minimise technical debt.

                                                                                                                  Saying, “I was in the War Room and had 20 minutes each day with the CEO” means that you were important and valued. Saying “I was on a Scrum team” means “Kick me”. On the whole, Agile and Scrum expect you to put your career goals on hold, and people will tolerate that during an exceptional short-term crisis (that merits an actual sprint) but not in perpetuity.

                                                                                                                  Eh. Agile expects you to contribute to the business, which is what businesses ought to reward. If you were looking to get a promotion to a position like “senior developer” or “technical architect” where you could be highly rewarded while destroying the business, then yeah, Agile expects you to give that up. I don’t think that’s unreasonable.

                                                                                                                  Even for people who have nothing to hide– even for people who’d be strong performers in a more sane, relaxed, progress-over-rapidity organization– a surveillance state is an anxiety state. It sucks, and if you’re good at what you do, it’s insulting to have to justify days and sometimes even hours of your time

                                                                                                                  That’s the opposite of my experience. You know what really sucks? Working on something for six months only to find out that what you’ve produced, technically elegant as it may be, is no use to anyone. Delivering something to an actual customer who will thank me for it every two weeks is a much happier way to work.

                                                                                                                  Actual problem employees are already visible to the team and manager regardless of whether this nonsense is used. (In many organizations, nothing is done about them because the organization or circumstances may make them hard to fire, but that’s a separate issue.)

                                                                                                                  If companies that use Agile consistently eliminate incompetent employees and companies that don’t use Agile consistently don’t, surely it’s fair to say that Agile is helping. And if Agile doesn’t make any difference to whether people get fired, why would it cause so much concern?

                                                                                                                  When you start playing the perpetual-crisis/surveillance-state game of Scrum, you end up losing a lot of good people. Either they take the perpetual performance review seriously, get stressed out, and start to falter after a while; or they disengage and become minimum-effort players just trying to protect an income.

                                                                                                                  Or you could just… not? I guess Agile makes the fact that even the best engineers have unproductive weeks more visible to management, but, well, that happens to be the truth; I don’t think concealing it from management is a good foundation to build a methodology on.

                                                                                                                  You’d rather have the variable worker who has some home-run weeks and some false starts than the reliably mediocre one.

                                                                                                                  That’s management’s choice to make. Either way, they should have the information they need to make that choice.

                                                                                                                  There is, it turns out, an intrinsic positive correlation between the variability of work (or risk) and the value rendered.

                                                                                                                  That’s the opposite of my experience. Doing the simplest, easiest, most obvious thing at every step turns out to be a lot more effective.

                                                                                                                  The Scrum/Agile mess prevented people from working on what was actually important to the business

                                                                                                                  Then something went seriously, badly wrong, because the whole point of Agile is that you demonstrate business value every two weeks.

                                                                                                                  I’ve seen non-Agile destroy a business. Indeed I’ve seen a direct correlation between how Agile the places I worked were, and how successful the business was.

                                                                                                                  the “Scrum” is project-specific and therefore time-limited: getting the project done (actually done, not “done” in some corporate sense) and scoring the client and getting a retainer means that people can relax.

                                                                                                                  Most non-consulting businesses are still in the business of landing a small number of big contracts with clients. Delivering a big feature to your client and then relaxing a bit after is a cadence that exists in most tech companies. Even for a direct-to-consumer business it’s there to some extent.

No one’s going to put his career goals on hold and work through a bunch of feature-level tickets for months, just for an average salary or for 0.05% in equity.

                                                                                                                  Again these “career goals”. I just don’t understand it. Working through feature-level tickets is how you produce value and how your company stays in business. Almost all non-feature-level-ticket roles add less value to the business, or even subtract value. That’s not a nice thing to want from your career even if your employer makes it an option.

                                                                                                                  1. 7

                                                                                                                    I think it’s important to ask why Agile is so easy to screw up. We don’t want fragile software applications and certainly shouldn’t want fragile software methodologies either.

                                                                                                                    1. 2

IMHO if Agile fails, it is usually because people applied some Agile workflow (like Scrum) while losing sight of the goals these methods are meant to achieve. For example:

• User stories are unclear / badly written. This is probably a very overlooked problem, but I have seen it many times. Sometimes the wording is not understandable at all, or no acceptance criteria are defined, or they spell out the technical realization instead of the functional requirement, leaving no room for engineers to find a better solution / architecture for the change. The solution is to have good refinement meetings where devs read and challenge stories, ask questions, get the PO to remove implementation advice but add the intent of the functionality, and demand proper acceptance criteria.
• The backlog is prioritized sloppily by the PO. This is especially harmful if it is done in a way that delays the initial breakthrough of getting a prototype up and running.
• Confusing “Agile” with only planning the next two weeks. Doing Scrum does not mean that you shouldn’t think about a roadmap or plan for the next year. However, it is important to plan at a much coarser level for long-term planning, and not to stick to plans that have been obsoleted by reality.
• Lousy estimation. Many people think estimation is about planning, and that is partially right. However, estimation is also an indicator of whether a story is well-written. If two devs estimate different workloads for a story, chances are good that the story is not clear about what should be done.
                                                                                                                      1. 2

Is Agile more or less easy to screw up than any other methodology? It could be that all methodologies are hard to follow; I remember reading that the most popular development process in practice (at 60% of those surveyed) was “no process”.

                                                                                                                        IME Agile mostly gets screwed up by not actually doing it - or by falling back to non-Agile as soon as anything goes slightly wrong. I suspect this is simply a case of people responding to their incentives; management is rewarded for claiming to adopt Agile, but has very little incentive to actually follow it.

                                                                                                                        1. 1

Probably more interesting is the question: what would be the alternative to Agile?

                                                                                                                          1. 3

Spiral, Chaos, Unified Method, OOSE, Rational Unified Process… there are a few, and while some of them (looking at you, RUP) are widely known to be terribad, I don’t think all of the others can be dismissed out of hand. One of the challenges is that research into this kinda stopped (TtBoMK) after Agile went big.

                                                                                                                            1. 1

When I read comments criticising Agile, I often feel that the commenters would rather work to their own plans: start with a goal in mind, implement, and deliver a solution. So it would probably be unfair to dismiss all of these attempts as chaos, but from what I have seen this requires above-average people to work. When it works, the work shows all the signs of Agility (prioritization, delivering increments, working software over documentation, people over processes, etc.).

I have also seen truly chaotic development that was pure waste. Also not pretty, and every time I read a critique of Agile, I worry that people would rather work that way.

                                                                                                                              1. 2

Chaos is a method inspired by mathematical chaotic systems, not social chaos.

                                                                                                                                1. 1

Ah, thanks for that hint.

To summarize, it seems that Spiral and Chaos could be applied within any agile workflow as methods to prioritize backlogs; they seem to operate on a somewhat different level than RUP/Scrum/Kanban.

                                                                                                                        1. 2

You never make a case for what the standard is. Is your contention that there should be no such thing as a “standard” and that one should consider all probable browsers? What does that look like in practice? In my annual roadmap of major development projects for 2018, do I ignore PWA because it’s what Chrome is pushing, or do I find a way of giving it 59% consideration?

                                                                                                                          1. 20

                                                                                                                            Web standards are a thing that have existed for years.

                                                                                                                            Unfortunately web developers jumping on the band wagon for “we only support X” is also a thing that has existed for years.

                                                                                                                            1. 5

                                                                                                                              I would contend that browser implementations have driven web innovation a lot more effectively than web standards agencies. Standards agencies gave us the wrong box model, the never-implemented CSS2, the dead end of XHTML. The web only started innovating again with the WhatWG takeover, which was effectively a coup where browser makers displaced the standards agency.

                                                                                                                              Browsers implement features, websites use them, they get standardised once they’ve proven themselves in practice rather than before. That’s the model that works, and using new features that chrome (or anyone else) has implemented, as and when those new features are useful to you as a web designer, is part of that.

                                                                                                                              1. 6

                                                                                                                                Such questions/statements are weird. Do you, for example, give Firefox the same consideration as Chrome for German customers? http://gs.statcounter.com/browser-market-share/desktop/germany

                                                                                                                                Note: NetMarketShare only gives you global statistics unpaid and hides others behind a paywall. I assume most companies don’t pay for that and don’t do proper research of their actual target audience.

The question is a rather broad one: do we, as an industry, want to support one of the biggest and most nosy software companies in taking over one of the crown jewels of the free web, the user’s client?

                                                                                                                                Yes, that’s a hard question to answer day to day, when features have to be implemented and budgets are thin. It still has to be answered.

We have more control over the situation than it might seem. This is how Firefox won the browser war back then: users recommending that other users not use the monopoly browser. Yes, you can totally ignore what Chrome is pushing and still deliver a great product.

                                                                                                                                1. 0

                                                                                                                                  I don’t think Firefox ever won the browser war; at its peak it still had significantly lower marketshare than IE.

                                                                                                                                  1. 11

                                                                                                                                    Firefox never aimed for dominance, but for breaking dominance. Winning is not “getting to the highest market share”.

                                                                                                                                    Firefox also had multiple target markets where it was the dominant browser for a couple of years.

                                                                                                                                    This whole idea that you have to be on slot 1 in a market with multiple billions of users to be winning is absurd.

                                                                                                                                    1. -2

Sounds like you are trying to redefine “win” to mean “succeed”.

                                                                                                                                      1. 5

                                                                                                                                        No.