1. 3

    Compiled languages definitely get an advantage out of strong typing and concrete API specifications, because planning resource utilization up front lets the compiler exploit many execution strategies, data layouts, and even caching or code hoisting. The more you want to maximally “use” the hardware architecture, the more these programs grow to “fit” the hardware.

    At the same time, dynamic languages/JITs are getting better at fitting the abstract expression of the programmer - functional programming can express very elaborate programs compactly/accurately/clearly, irrespective of the intermediate data types/APIs used in constructing them. The idea is to “fit” the nature of the abstractions being manipulated rather than the nature of how they are executed.

    I’m currently debugging a symbolic configuration mechanism that was prototyped in a week in a dynamic language, but is meant to function in an embedded OS written in a very low-level language, as part of a bootstrap. It is taking months to finish, mostly due to adapting the code to work in such a programming environment - you alter the assemblage of primitives until you have built enough of a virtual machine to handle the semantics of the necessary symbol processing. An oddball case, but it’s an example of the two approaches meeting (the virtue of this is that it allows enough “adaptability” at the low level that you don’t need to drag along an entire dynamic programming environment to serve the huge amounts of low-level code that otherwise fit the compiled model perfectly).

    1. 2

      Compiled/interpreted and strongly/weakly typed have little to do with each other. Ditto for low/high level: Swift compiles to machine code but good luck maintaining any cache locality with its collections.

      1. 1

        Depends on the application. And yes, we don’t have a good model for cache locality: how much can we gain vs. the complexity to code/maintain it?

      2. 1

        Can you elaborate on the strengths of the dynamic language that allowed you to prototype it so quickly? The difference in development time stated here is really striking.

        1. 1

          Sure. First, about the problem - “how do you configure unordered modules while discovering the graph of how they are connected?”. The problem requires multilevel introspection of constructed objects, with “temporary” graph assignments in a multilevel discovery phase, then a successive top-down construction phase with exception feedback.

          The symbolic “middle layer” to support this was trivial to write in a language like Python using coroutines/iterators, and one could quickly refactor the topological exception-handling mechanism to deal with the corner cases by using annotation methods. So the problem didn’t “fight” the implementation.

          With the lower-level compiled language, by contrast, too much needed to be rewritten each time to deal with an artifact, so in effect the data types and internal API kept changing to fit the low-level model. It was also too easy to introduce new boundary-condition errors on each rewrite, whereas the more compact representation in the dynamic version didn’t thrash as much and so didn’t have this problem.

          Sometimes with low level code, you almost need an expert system to maintain it.
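
          Roughly, the dynamic version had the shape of the toy sketch below (hypothetical names, nothing like the real system - it just shows a generator-driven discovery phase feeding a top-down construction phase with exception feedback):

          class Module:
              def __init__(self, name, deps=()):
                  self.name, self.deps = name, tuple(deps)

              def construct(self, built):
                  # stand-in for real construction from already-built dependencies
                  return f"{self.name}({', '.join(built[d] for d in self.deps)})"

          def discover(modules):
              # discovery phase: yield modules in dependency order, lazily
              done, in_progress = set(), set()
              def visit(name):
                  if name in done:
                      return
                  if name in in_progress:
                      raise ValueError(f"cycle at {name}")
                  in_progress.add(name)
                  for dep in modules[name].deps:
                      yield from visit(dep)
                  in_progress.discard(name)
                  done.add(name)
                  yield modules[name]
              for name in modules:
                  yield from visit(name)

          def configure(modules):
              # construction phase, with exception feedback per module
              built = {}
              for mod in discover(modules):
                  try:
                      built[mod.name] = mod.construct(built)
                  except Exception as exc:
                      raise RuntimeError(f"configuring {mod.name} failed") from exc
              return built

          mods = {m.name: m for m in [Module("net"),
                                      Module("fs", ["net"]),
                                      Module("app", ["fs", "net"])]}
          print(configure(mods))
          # {'net': 'net()', 'fs': 'fs(net())', 'app': 'app(fs(net()), net())'}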

      1. 6

        I changed my mind about how old UNIX is after reading The Art of Unix Programming by Eric Raymond. I don’t want structured binary file formats. Strings, please.

        1. 2

          Always a little wary when this is the first line:

          At Microsoft, the core of our vision is “Any Developer, Any App, Any Platform”
          

          Orlly?

          But I’m interested…

          1. 4

            Maybe “all platforms” means all versions of Windows, most OS X, and latest Debian stable.

            1. 6

              To be fair, that puts them miles ahead of almost everybody else working on platforms these days :(

              All the action right now seems to be on “JS tool of the week” and “build for Android and iOS with one codebase”

            2. 3

              I sincerely don’t understand this hate against Microsoft.

              Yeah sure, the Ballmer days sucked and they really screwed up. But since Satya picked up the role, there seems to have been quite a shift in the company’s mindset. Plus all the OS things they’ve been doing in the past few years.

              1. 1

                That’s a description of what they want to extend and eventually extinguish.

                1. 1

                  That view is a little out of date, don’t you think? What have they tried to extinguish recently?

              1. 17

                Another fun one:

                λ> let 2 + 2 = 5 in 2 + 2
                5
                

                (Plus a very stern non-exhaustiveness warning.)

                1. 12

                  Is that a redefinition of infix + defined only on a left and right argument of 2? Terrifying.

                  1. 5

                    It’s slightly less weird when you consider that operators in Haskell are just syntactic sugar for functions, and functions can be partially defined using pattern-matching on the LHS of the definition. For example, this trivial function returns True when its argument is 0, and False otherwise. You could of course trivially define it in one body too, but you can also use pattern-matching on arguments like this:

                    isZero 0 = True
                    isZero _ = False
                    

                    So the let here locally shadows (+) with a pattern-matching definition: within its scope, anything that matches the local patterns uses the new definition, and anything that doesn’t triggers a pattern-match failure (hence the non-exhaustiveness warning) rather than falling back to the (+) bound further out in scope.

                    1. 2

                      Yep! That’s exactly it. :)

                  1. 7

                    The merits of minimalism aside, brutalism was a blight on architecture for decades, a cult of ugliness, and produced buildings which still ruin cities to this day. The examples of brutalism-inspired web design in the article are highly aesthetic by comparison.

                    Also, I appreciated the Nine Inch Nails reference.

                    1. 4

                      This article was posted on lobste.rs a while ago and it really opened my eyes to the point you’ve stated about brutalism. Here was the discussion around it.

                      1. 4

                        I dunno. I find brutalist architecture quite aesthetically pleasing.

                        1. 5

                          Brutalist buildings in good repair are treasures.

                          1. 4

                            By what measure? Taste/distaste for brutalist architecture is highly opinionated in my experience.

                            1. 1

                              By the measure of my subjective experience, of course :)

                              1. 2

                                At least you’re honest about it. :)

                        1. 1

                          Wow, I didn’t know you could do that.

                          1. 4

                            Hey guys, it’s April 2nd, turn it back now.

                            1. 5

                              It’s not a joke. It’s like this forever.

                              1. 9

                                It’s not a joke. It’s like this forever.

                                If you want a picture of the future, imagine an HTML table stamping on a human face — forever.

                                In seriousness, we’re down to the last couple minutes on this gag. I’m taking some screenshots and then cleaning up and resetting the server over the next hour or so.

                            1. 2

                              Yes please. As someone using an underrepresented programming language and paradigm, I have found many of these posts mind-expanding and would have loved to browse by tag.

                              1. 14

                                I wouldn’t call defer a “very elegant solution” when RAII exists :)

                                1. 7

                                  The problem with RAII is that the cleanup has to live in a class destructor. Defer can just happen by writing a single line of free-standing code.

                                  1. 7

                                    Except RAII can handle the case where ownership is transferred to some other function or variable. Also, it scales well to nested resources, whereas figuring out which structs in a given C library require a (special) cleanup call depends entirely on careful reading of the relevant documentation. If RAII were just about closing file handles at the end of a function, few people would care.

                                    1. 2

                                      Except RAII can handle the case where ownership is transferred to some other function or variable.

                                      Does that matter for languages that have GC?

                                      1. 7

                                        RAII is not exclusive to memory management. The Resource in RAII can be acquired memory, but it can equally be an open file descriptor, a socket, or any other resource that GC won’t collect.
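
                                        To make that concrete in a GC’d language (Python here, purely as an illustration, with a made-up file name): the collector reclaims the object eventually, but nothing says when the underlying descriptor is released - which is why such languages grow scoped-cleanup constructs of their own:

                                        # Relying on GC alone: the file object is finalized "eventually",
                                        # but the OS-level descriptor (and any unflushed data) hangs
                                        # around until then.
                                        f = open("config.txt", "w")
                                        f.write("answer = 42\n")
                                        # ... forgot f.close() ...

                                        # Scoped cleanup: the descriptor is flushed and closed
                                        # deterministically when the block exits, even if an exception
                                        # is raised inside it.
                                        with open("config.txt", "w") as f:
                                            f.write("answer = 42\n")

                                        It is the same point RAII makes, just tied to a block instead of an object’s lifetime.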

                                      2. 1

                                        I think the ideal solution would be to be able to use class destructors for some things, but also be able to add a block to the “destruction” of a specific instance.

                                    2. 3

                                      Doesn’t RAII sort of hide the cleanup from your actual code? I imagine that can work only if one can trust that every library you ever use behaves well in this manner. Then again, I guess an explicitly called cleanup routine may be of poor quality as well.

                                      1. 8

                                        That’s the point. Cleanup is automatic, deterministic, invisible. You can’t forget it, while you definitely can forget a defer something.close().

                                        Every library in Rust does behave like this, and I guess pretty much every library in C++ (that you would actually want to use) does as well.

                                      2. 3

                                        Excellent point! Now it feels only slightly more elegant than goto :)

                                      1. 1

                                        I’ve enjoyed working on it because many of the decisions were made automatically, by observing any issues and then simply following the types, as I’ve tried to illustrate here.

                                        And it was a joy reading this for exactly that reason. It’s always a pleasure when people manage to turn application development into a logical, deductive process.

                                        1. 15

                                          Java, XML, Soap, XmlRpc, Hailstorm, .NET, Jini, oh lord I can’t keep up. And that’s just in the last 12 months!

                                          Oh simpler times when we only had 7 new technologies in the last 12 months. Also after I read that I realized this was published in 2001 and it suddenly made a lot more sense.

                                          All they’ll talk about is peer-to-peer this, that, and the other thing. Suddenly you have peer-to-peer conferences, peer-to-peer venture capital funds, and even peer-to-peer backlash with the imbecile business journalists dripping with glee as they copy each other’s stories: “Peer To Peer: Dead!”

                                          s/peer-to-peer/blockchain/g, this may have been from 2001 but it’s still so relevant

                                          1. 2

                                            What are the 2018 equivalents? Obviously Blockchain: Is there anything else that has that ‘new hotness’ quality which makes it irresistible to neophiles?

                                            1. 8

                                              IOT, AI/ML, Serverless and of course: microservices

                                              1. 3

                                                Oo yes. Docker et al definitely qualify.

                                                1. 2

                                                  I forgot the most important one: kubernetes

                                              2. 1

                                                Also of interest is the converse: what are the things that have recently lost (or are in the process of losing) this quality?

                                                1. 3

                                                  Peer to peer.

                                                  1. 3

                                                    recently… :)

                                                  2. 2

                                                    I’m hearing less about big data and nosql

                                                    1. 1

                                                      Bigdata has folded into AI/ML or just analytics

                                                      1. 2

                                                        On top of it, we have a new fad of stronger-consistency DBs with SQL layers. One of the few fads I like, too. I hope they design even more. :)

                                              1. 1

                                                Neat idea. I briefly toyed with the idea of developing blogs or other static content with minimal processing on Lambda. To be clear ahead of time, I don’t know anything about Lambda past what descriptions I’ve seen in blog posts and such. Do tell me if any of this is impossible on Lambda or a Lambda-like service (eg future competitor or homebrew).

                                                In the thought experiment, I’d use a combination of a VM/bare-metal machine (“the Machine”) with the Lambda service (“the Service”). The Service would soak up most of the traffic, running any computation that could be done right at that point. Aside from forwarding results in simpler form, it might also sync up with the Machine if that sync function itself could happen quickly within its execution window.

                                                The monitoring of the Service or its data for integrity/security would happen on the Machine. If the Service runs something like CRDT’s, the Machine might also issue fixes for data to active nodes in the Service, though that might not be necessary if the Machine has a copy of the data.

                                                If the Machine can upload to the Service, then I could also have the Machine create new, static snapshots of a live blog or some other up-to-date data, and embed them in the images it uploads for the Service. In that way, the Service can also be a data cache for the Machine.

                                                It seemed like you were trying to do everything on Lambda. Did you investigate split architectures like that for databases?

                                                1. 2

                                                  Check out Serverless, a framework for strapping together elastic Amazon services

                                                1. 1

                                                  Several years ago, this article might have been a few thousand words. There’d be tables and charts. They’d reference academic studies and correlate the data with something like unemployment.

                                                  Graphs aren’t that new, bub…

                                                  1. 6

                                                    I thought it was the perfect length. Rage against overwrought blog posts!

                                                    1. 4

                                                      Empirical Software Engineering: I took a course on this at my university. It was an eye-opener. I can publish some of my reviews and summaries sometime. For now, let me give you some basic ideas:

                                                      1. Use sensible data. Not just SLOC to measure “productivity,” but also budget estimates, man-hours per month, commit logs, bug reports - as much stuff as you can find.
                                                      2. Use basic research tools. Check whether your data makes sense and filter outliers. Gather groups of similar projects and compare them (a minimal sketch of this step follows the list).
                                                      3. Use known benchmarks to your advantage. What are known failure rates, and how can these be influenced? When is a project even considered a success?
                                                      4. Know about “time compression” and “operational cost tsunamis”: phenomena such as the increase in total cost caused by “death march” projects, and the way operational costs are already incurred during development.
                                                      5. Know about the quality of estimates, and estimates of quality, and how both can improve over time. Estimates of the kind “this is what my boss wants to hear” are harmful. Being honest about total costs allows you to manage expectations: some ideas (let’s build a fault-tolerant, distributed, scalable and adaptable X) are more expensive than others (use PHP and MySQL to build a simple prototype for X).
                                                      6. Work together with others in the business. Why does another department need X? What is the reason for deadline X? What can we deliver, and how can we decrease cost, so we can pursue opportunity X or research Y instead?
                                                      7. Optimize at the portfolio level. Why does my organization have eight different cloud providers? Why does every department build its own tools to do X? What are strategic decisions and what are operational decisions? How can I convince others to do X instead? What is organizational knowledge? What are the risks involved when merging projects?
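
                                                      To illustrate step 2, a minimal sketch using only the standard library and made-up numbers (hypothetical data, not from any study):

                                                      from statistics import median

                                                      # Hypothetical man-hours/month for two groups of projects.
                                                      small = [120, 135, 150, 980, 140, 128]  # 980 looks like bad data
                                                      large = [480, 510, 455, 530, 60, 495]   # so does 60

                                                      def drop_outliers(xs, k=3.0):
                                                          # crude sanity filter: keep points within k * MAD of the median
                                                          m = median(xs)
                                                          mad = median(abs(x - m) for x in xs) or 1.0
                                                          return [x for x in xs if abs(x - m) <= k * mad]

                                                      for name, data in [("small", small), ("large", large)]:
                                                          clean = drop_outliers(data)
                                                          print(name, "median man-hours/month:", median(clean),
                                                                f"(kept {len(clean)}/{len(data)})")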

                                                      Finally, I became convinced that for most organizations software development is a huge liability. But, as a famous theoretical computer scientist said back in the day: we have to overcome this huge barrier, because without software some things are simply impossible to do. So keep making the trade-off: how much are you willing to lose, for the slight chance of high rewards?

                                                      1. 2

                                                        Any books you want to recommend?

                                                        1. 1

                                                          I’m also interested in that, but in online articles instead of books. The very nature of empiricism is to keep looking for more resources or angles, since new work might catch what others miss. Might as well apply it to itself in terms of what methods/practices to use for empirical investigation. Let’s get meta with it. :)

                                                      1. 2

                                                        Maybe another way to think about this is “Can I not do FP in my language?”. Yes for JavaScript and Scala and Rust - you can write procedural code to your heart’s content in these languages, even if JavaScript gives you the tools to use functional abstractions and Scala and Rust actively encourage them. No for Haskell and Elm - there’s no way to write code that looks imperative in these languages.

                                                        1. 9

                                                          No for Haskell and Elm - there’s no way to write code that looks imperative in these languages.

                                                          main = do
                                                            putStrLn "What is your name?"
                                                            name <- getLine
                                                            putStrLn $ "Hello, " ++ name
                                                          
                                                          1. 5

                                                            No for Haskell and Elm - there’s no way to write code that looks imperative in these languages.

                                                            What do you mean by “looks imperative”? Doing everything inside the IO monad is not much different from writing a program in an imperative language.

                                                            1. 2

                                                              You mean StateT and IO. And then learning how to use both.

                                                            2. 3

                                                              Writing Haskell at my day job, I’ve seen my fair share of Fortran written in it. The language is expressive enough to host any design pathology you throw at it. No language will save you from yourself.

                                                            1. 5

                                                              Although I haven’t coded in it, I find this language interesting in a lot of ways. Drawing on the traits of the languages in the title already gives it a ton of potential if done right. It does have a REPL already. Its key components started at under a thousand lines of code each. It was written in Nim, leverages it for GC + concurrency, and can use either Nim or C for performance reasons. I was strongly considering writing a LISP or Smalltalk variant in Nim for the past few months for similar reasons. The output would be very different, since I want to leverage various analyses for safety-critical tooling. The common thinking seems to have been Nim as a strong base language for macros, portability, and performance. As in Spry, one also needs something to drop down to when the new language isn’t cutting it for whatever reason.

                                                              It’s going on the Bootstrapping page since it might be used for or inspire something along those lines. :)

                                                              1. 2

                                                                Nick, you like Nim? I didn’t know that!

                                                                1. 5

                                                                  Well, I just looked at its features and example code. What I saw was a language that had improvements in readability, power, and safety over C-style systems languages, and that also outputs C. That’s definitely nice. It has potential both on its own and as a point in the design space of what to do next.

                                                                  My personal favorites were always PreScheme and Modula-3 for the right balance of power, compile speed, runtime speed, relative ease of machine analysis, and implementation simplicity. You will rarely if ever see that combo, since the simple implementations almost always trade off power, performance, or comparable safety, and the complex ones are hard to compile or analyze.

                                                                  Before Rust, Julia, and Nim, my recommendation was to embed a version of Modula-3 in PreScheme, with its macros and malleability to boost power. However, the syntax, default includes, and output would all be C, to just drop it into the ecosystem. Add in Ada- or Cyclone-style safe-by-default properties. Although I never tried to build it, I’ve gotten to see pieces of it form in other languages: Rust learned from Cyclone; Julia was a LISP internally that made including C effortless; Nim had Pythonic style and macros compiling to C.

                                                                  Still, nothing has all the traits of PreScheme and Modula-3, but I keep looking for anything close that might be sub/super-setted into such a language. I had even considered embedding Rust or Nim into Racket, with its macros and IDE, but it may be too much of a mismatch. I still hope, though, that systems programming can get leaps above C/C++ with some increment better than Rust, D, Ada, or Nim at balancing the prior goals.

                                                                  Hope all that makes sense. Also, such a language would make my Brute Force Assurance concept easier - the idea of combining the verification tooling of many languages for one. I think I already wrote about it here, but I’m not sure.

                                                              1. 39

                                                                Perhaps build systems should not rely on URLs pointing to the same thing to do a build? I don’t see GitHub as being at fault here; it was not designed to provide deterministic build dependencies.

                                                                1. 13

                                                                  Right, GitHub isn’t a dependency management system. Meanwhile, Git provides very few guarantees regarding preserving history in a repository. If you are going to build a dependency management system on top of GitHub, at the very least use commit hashes or tags explicitly to pin the artifacts you’re pulling. It won’t solve the problem of them being deleted, but at least you’ll know that something changed from under you. Also, you really should have a local mirror of artifacts that you control for any serious development.

                                                                  1. 6

                                                                    I think the Go build system issue is a secondary concern.

                                                                    This same problem would impact existing git checkouts just as much, no? If a user and a repository disappear, anyone who had a working checkout of that repository’s master:HEAD could “silently” recreate the account, reconstruct the repository with the master branch from their checkout, and then do whatever they want with the code moving forward. A user doing a git pull to fetch the latest master may never notice anything changed.

                                                                    This seems like a non-imaginary problem to me.

                                                                    1. 11

                                                                      I sign my git commits with my GPG key; if you trust my GPG key and verify the signatures before using the code you pulled, that would save you from using code from a party you do not trust.

                                                                      I think the trend of tools pulling code directly from Github at build time is the problem. Vendor your build dependencies, verify signatures etc. This specific issue should not be blamed directly on Github alone.

                                                                      1. 3

                                                                        Doesn’t that assume that the GitHub repository owner is also the (only) committer? It’s unlikely that I will be in a position to trust (except blindly) the GPG key of every committer to a reasonably large project.

                                                                        If I successfully path-squat a well-known GitHub URL, I can put the original Git repo there, complete with GPG-signed commits by the original authors, but it only takes a single additional commit (which I could also GPG-sign, of course) by the attacker (me) to introduce a backdoor. Does anyone really check that there are no new committers every time they pull changes?

                                                                        1. 3

                                                                          Tags can be GPG-signed. This proves that all commits before the tag are what the person signed. That way you only need to check the people assigned to signing the tagged releases.

                                                                    2. [Comment removed by author]

                                                                      1. 2

                                                                        Seriously, if only GitHub would get their act together and switch to https, this whole issue wouldn’t have happened!

                                                                        1. 4

                                                                          I must have written this post drunk.

                                                                    1. 1

                                                                      What a wonderful and provocative article.

                                                                      1. 3

                                                                        Praise modular implicits!