1. 1

    Great footnote:

    It is interesting to note that most of the central personalities first met through an unofficial reading group formed by an enthusiastic amateur named Mervin Pragnell, who recruited people he found reading about topics like logic in bookstores or libraries. The group included Strachey, Landin, Rod Burstall, and Milner; they met in a borrowed seminar room at Birkbeck College, London, to read about combinatory logic, category theory, and Markov algorithms. All were self-taught amateurs, although Burstall would later get a PhD in operations research at Birmingham University.

    Also worth mentioning that Donald Michie, who worked with Turing at Bletchley Park, was responsible for bringing several of these people together in Edinburgh.

    1. 2

      I found it fascinating in itself that HOPL is so infrequent:

      Past conferences were held in 1978, 1993, and 2007. The fourth conference was originally intended to take place in June 2020, but has been postponed. [1]

      It’s a twice-in-a-career opportunity.

      [1] https://en.wikipedia.org/wiki/History_of_Programming_Languages

      1. 1

        It doesn’t sound so bad yet. According to the SIGPLAN website:

        We are working with SIGPLAN to identify a new time and place for the physical HOPL meeting, probably in the first half of 2021.

      1. 3

        Oh no, a data flow language in 3D? LabVIEW is already bad enough in 2D!

        In all seriousness though this is a neat idea. I currently don’t see VR improving much upon existing development workflows in any meaningful way, mostly because it’s hard to read small text. A language + environment designed with VR in mind could be quite interesting though.

        Edit: On taking another look at this, I noticed this paper was written in ’96!

        1. 1

          Yeah the VR thread feels pretty stale, but beyond the funny picture there is a really nice combination of good things in here:

          • “distinguish between logical disjunctions and conjunctions, and between sum and product types”
          • “based on a higher-order form of Horn logic”, “can be passed as arguments to other predicates”
          • “static polymorphic type system, and uses the Hindley-Milner algorithm to perform type inference”

          If all of that could be made performant, it’s pretty much the language of my dreams.

        1. 4

          And this, kids, is why “polyglot programming” is the biggest scam perpetrated on the American public since the invention of the carpet sweeper!

          Learning a language to true proficiency – which includes everything listed here and more – is hard, takes a lot of time, and should be a pre-requisite for writing production code.

          1. 5

            should be a pre-requisite for writing production code

            I’d like to go on a little tangent here: I think we all know that the only way to reach that level of proficiency… is to write production code. All companies with predictable cash flow should get very serious about hiring and mentoring junior programmers. When I’m in a hiring position, I never make knowledge of our precise tech stack the main criterion.

            If we don’t take mentoring seriously, there is no way to get the quality programmers we all need. We can’t just leave the teaching to others and then pick the fruit when it’s ripe.

            1. 3

              Full ack. Jumping in at the deep end and having people around you show you the ropes, with good code reviews, is the best way to learn. That also resolves the paradox of learning a language while working on production code: the experienced team members will be able to explain where you’ve hit subtle pitfalls of the language, where you’re not using it idiomatically, and so on, which protects you from actually pushing such mistakes into production.

              1. 2

                I don’t disagree with this. I’d just say if that’s what’s going on, you should have a good mentoring and code review system.

              2. 1

                I sort of agree, but then I could not have learned about concepts that helped me significantly in writing cleaner code. Let’s say my default language is Python and I do most of my work in that. Learning Clojure and Haskell was amazing, even though I cannot use them in production at all, let alone master them (obviously, without production experience). I think there are a large number of people writing Python or Java (two trendy languages) who are not aware of functional language concepts that are very obvious to people coming from Clojure or Haskell. Funnily enough, Java 8 introduced a lot of these concepts, and now the people who five years ago were convinced that a for loop is the best way to iterate are proudly claiming that map/reduce is the best.

                Earlier in my career, a friend of mine with a background in CS explained to me that people think about CS concepts in terms of whichever language they use daily or learned to program in. This is the reason that universities teach (or at least used to teach) programming with SML, Scheme, etc.
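                To make the for-loop-vs-map/reduce contrast concrete, here’s a toy Python example of my own (not from the thread), computing the same result both ways:

```python
from functools import reduce

orders = [12.5, 7.0, 30.0, 4.5]

# Imperative style: a loop accumulating into mutable state.
total = 0.0
for price in orders:
    if price > 5:
        total += price * 1.1

# Functional style: the same computation as a filter/map/reduce pipeline,
# with no mutable accumulator in sight.
total_fp = reduce(
    lambda acc, x: acc + x,
    map(lambda p: p * 1.1, filter(lambda p: p > 5, orders)),
    0.0,
)

assert abs(total - total_fp) < 1e-9
print(total_fp)
```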

                1. 1

                  Oh I’m not saying you shouldn’t learn many languages… as an individual programmer, you should. But you should also be honest about your level of proficiency in each, and learn at least some deeply.

                  I’m attacking the idea that encourages a team of software developers to simultaneously do work in multiple languages because… something, something, “right tool for the job”, something, “hey, I can learn that over the weekend!”

                  Sometimes it can’t be avoided, of course. But the idea that you’d choose this path voluntarily, assuming almost 0 cost to each added language, is the insane part.

              1. 1

                Paywalled. I don’t suppose you have an ‘institution-independent’ download link anywhere?

                1. 1

                  Heh that’s weird, I was actually able to get a timestamped, non-shareable pdf from that link, but here’s a better one: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.673.5718&rep=rep1&type=pdf

                1. 5

                  This article is from 2014. If one continues to read through the author’s journey, then they eventually discover map and others in Swift. They also explain their wonder at Haskell.

                  As usual, “functional” is being used here to signify Lisps and MLs; it sounds like, in particular, the author expects a functional programming language to be like Haskell.

                  Objects and functions have a well-known sort of duality. We can formally model objects as state transformers; they are pure, but become impure when embedded in a particular stateful context. The rest is window dressing. For example, linked lists can be encoded as objects, and the encoding is the same as in traditional C exercises handed to undergraduate students, with a structure holding a pair of pointers, giving the O(1) behavior that the author craves.
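                  For concreteness, here’s a minimal Python sketch (my own, hypothetical) of that encoding: a structure holding a pair of pointers, exactly as in the classic C exercise, giving the O(1) append behavior:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    """State-transformer view of an object: each method transforms
    the internal state (the pair of head/tail pointers)."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        # O(1): no traversal, just move the tail pointer.
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def to_list(self):
        out, node = [], self.head
        while node is not None:
            out.append(node.value)
            node = node.next
        return out

xs = LinkedList()
for v in (1, 2, 3):
    xs.append(v)
print(xs.to_list())  # [1, 2, 3]
```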

                  To some extent, the author’s complaint is about the prelude chosen by the language designers. Like with many other languages, the lowest layers of Swift’s user-facing APIs are built out of Swift itself, and so the degree to which Swift feels “functional” is controlled by the choice of prelude. I don’t work with Swift myself, but under a minute of searching led me to alternative functional Swift preludes.

                  As a final note, the author’s criteria don’t really separate functional programming from, say, relational logic programming. Both paradigms orient around immutable built-from-scratch data structures and arrows which send values to other values; the main difference is the choice of category (Cartesian closed vs. compact closed) but it leads to a difference in behavior and experience.

                  1. 1

                    Good point about the prelude, but on the other hand it makes little sense to separate a language from its ecosystem, especially a language like Swift that was created to fill a very specific niche. For me, it was the nature of the standard data structures that made me feel it’s an uphill struggle to try to stick to the functional idiom in Swift, although some of the other pieces are there. Glancing at the prelude you linked to, it seems they are trying to address that which is great. If it can co-exist with the standard prelude, I might even try it out sometime.

                    Your last point is very interesting to me - to be fair, virtually no-one takes an interest in the similarities and differences between functional and relational programming. Do you have a link for the category theoretic characterization?

                    1. 1

                      I don’t have a single really good link. The closest I have is this entrypoint to nLab, but that table doesn’t show allegories. The main idea, informally, is that we can lift functions to relations, but we cannot always find a function which expresses a given relation. So, we can lift all of functional programming into relational programming, and we gain the ability to run in reverse, but lose the ability to have exactly one answer. There is more to discover formally; I have yet to figure out how to apply it to programming, but Set and Rel together form a very nice double category, and perhaps this doubling-up of heterogeneous categories is the right explanation for how they are different and yet related.
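                      The lifting can be made concrete with a small Python sketch of my own (an illustration, not something from the nLab page): model a relation as a set of pairs, lift a function by pairing each input with its output, and observe that running backwards may yield zero or many answers:

```python
def lift(f, domain):
    """Lift a function to a relation: the set of (x, f(x)) pairs."""
    return {(x, f(x)) for x in domain}

def run_forward(rel, x):
    """Functional direction: at most one answer for a lifted function."""
    return {b for (a, b) in rel if a == x}

def run_backward(rel, y):
    """Relational bonus: run in reverse, but possibly zero or many answers."""
    return {a for (a, b) in rel if b == y}

square = lift(lambda n: n * n, range(-3, 4))
print(run_forward(square, 2))   # {4} — exactly one answer
print(run_backward(square, 4))  # {-2, 2} — more than one answer
print(run_backward(square, 5))  # set() — no answer at all
```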

                      1. 1

                        Thanks, interesting stuff! I posted another paper that you might find interesting (although it doesn’t concern functional programming): https://lobste.rs/s/zvum8i/from_conventional_institution

                  1. 5

                    I came to functional programming via Haskell, so I tend to think that if your core language includes unmediated side effects and mutable types then it’s not functional, but there’s a spectrum. A lot of people regard ML as a functional language, because it encourages you to write in a functional style, even though it happily accepts side-effecting imperative code. I have a lot of sympathy with that view.

                    I think this problem started with the conflation of ‘has higher-order functions’ and ‘is functional’. A decade or so ago there were a load of people arguing that Python was a functional language because it had higher-order functions and map. Those are certainly things that exist in a functional language, but they’re also things that existed in Smalltalk, which is the archetypal dynamic OO language.

                    Most of these arguments really boil down to trying to condense ‘X is a kind of Y’ and ‘Y is a good thing’ to ‘X is a good thing’. Why do people care if Python or Swift is a functional language? Because there are some things that they want to do that are easy in functional languages. Do these languages support those idioms? If so, they capture the part of functional programming that you want and that’s great. But if you want to communicate effectively then you need to say what you mean: Python and Swift (and C++ and Objective-C and Smalltalk and Ruby and…) make it easy to apply arbitrary transforms to a collection by defining the transform on an element.

                    The part I want from functional programming is deep immutability guaranteed by the type system, so I guess I have to keep looking…

                    1. 3

                      “Functional programming” cannot be defined in terms of language features. It has to be defined socially. How “functional” a language is can be defined by how many other groups of languages its users look down on and consider to be impure.

                      The base of all FP beliefs – the fundamental creed, if you will – is to declare that von Neumann was a heretic who blasphemed against the holy purity of computing, and renounce him and all his works. The most fundamental language level is probably McCarthy’s original vision of Lisp.

                      The next level up is to make heretics of languages and programmers at the base level. So Schemers, for example, adhere to the idea that the only proper computation is a tail recursive computation, and anathematize anyone who disagrees or who dares to use a non-Scheme Lisp. And the ML family sneers at apostates who have the gall to use untyped lambda calculus as the basis of their languages.

                      And from there you climb the hierarchy until you find a place where you’re content that everyone below you is a heretic but everyone above you is just a weird schismatic arguing about pointless doctrinal arcana. And this is fluid over time! Haskell folks now look down on a lot of other groups, but one day there will be another language whose adherents will sigh and point out that only a heretic or, at best, a hopelessly confused dilettante, would ever think Haskell of all things had any capacity for functional programming.

                      Once you shift your viewpoint to understand this, it starts making a lot more sense.

                      Also, I am absolutely not trolling or being satirical here – this is actually how I approach the question of how to define “functional programming”, and I find it’s a far more practical approach than any other I’ve seen proposed.

                      1. 1

                        Hey David, I used to follow the Étoilé project pretty closely. Nice to have you around! It would be great to read about your thoughts on the dynamic/static divide some time, since I know you’ve delved deep into both sides.

                        1. 2

                          Hi. The language that I’m working on now is Verona. We haven’t implemented any of the object model yet (we’ve been working on the concurrency model first, because getting that part of the abstract machine right is hard - I think we know roughly how to do object models well). Static vs dynamic means different things to different people and most languages are somewhere in the middle. Some slightly random thoughts on where I hope we’re going:

                          There’s been a lot of recent work on static type systems with structural and algebraic types (exemplified in Pony). The combination of these and type inference is very powerful. For example, in some made-up pseudocode, imagine writing:

                          var a;
                          if (x)
                          {
                              a = foo();
                          }
                          else
                          {
    a = bar();
                          }
                          a.something();
                          

                          Now, if foo() returns a Foo and bar() returns a Bar, then the static type of a at the end will be Foo | Bar. If both Foo and Bar implement a something() method, then the call at the end will work. It’s up to the compiler to decide on the most efficient way of implementing that dynamic call. It may define an interface containing only the something() method and do an interface cast (which can be efficient if your runtime uses selector colouring), or just check the type of the return values and dispatch to a direct call based on that.

                          A system like this feels a lot like a dynamic language, but has the implementation efficiency and static checking advantages of a static language. You can, for example, change the first line to:

                          var a : Foo | Bar;
                          

                          When you do this, you get static checking if you add another path that uses different types. This composes well with a separation where classes are concrete types, with no subtyping. You can potentially have subclassing, but if A is a subclass of B, that doesn’t mean A is a subtype of B. This means that you cannot assign an A pointer to a B* value, but you can have an interface type that describes the methods in B and will accept an A pointer or a B pointer (or anything compatible). This lets the caller decide whether to pay the cost of dynamic dispatch, not the callee. If you want to write code that is generic over A-like things, you can pay some cost, if you want to write code that is specialised to A, then you can do that.
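                          Python’s gradual typing offers a rough, much weaker analogue of this (an illustrative sketch of my own, not the Verona semantics): since Python 3.10 you can spell the union `Foo | Bar` directly, and a checker such as mypy only accepts the call if every member of the union defines the method — while at run time it’s ordinary dynamic dispatch:

```python
from __future__ import annotations

class Foo:
    def something(self) -> str:
        return "foo"

class Bar:
    def something(self) -> str:
        return "bar"

def foo_or_bar(x: bool) -> Foo | Bar:
    # The annotated return type plays the role of the inferred Foo | Bar.
    return Foo() if x else Bar()

a = foo_or_bar(True)
# Accepted by a static checker because both Foo and Bar define something();
# adding a branch that returns a type without something() would be flagged.
print(a.something())  # foo
```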

                          This becomes more interesting in combination with generics. One of the many painful things in C++ is that you have two completely different syntaxes for writing code that handles different types. You can provide an interface (or, in C++ terminology, an abstract class) and do dynamic dispatch over it in a single version of a function, or you can create a templated function and have a version of your function generated for each type that it’s called with. These have different performance characteristics (and different ABI impacts), but you have to decide which one you want to do very early on. We should be able to write code with generic parameters and decide later whether we want to do run-time dynamic dispatch or compile-time specialisation. Rust and C# have mechanisms for doing this.

                          I’m increasingly convinced that we don’t want run-time reflection in the language. We instead want a compile-time reflection mechanism that is sufficient for implementing a standard library reflection framework. Reflection is also tricky with modularity. I like Smalltalk / Objective-C reflection and have used them a great deal, but it’s incredibly annoying that you can use reflection to inspect a private field and now it’s part of the ABI that you depend on. It would be great to have uses of reflection part of the type system and part of the ABI contracts. If something chooses to expose itself to some uses of reflection across a module boundary, that’s a commitment to an ABI contract. If it doesn’t, then it should be opaque to reflection. Most of the uses of reflection I have are actually fine to drive from the reflected object. For example, if an object wants to support serialisation, it’s fine for it to import a serialisation adaptor that uses reflection and it can then expose an interface for multiple serialisers (and guarantee upgrade from prior serialised representations). If a program wants to use reflection to hook up things from a UI builder, it can expose some interfaces to dynamic reflection.
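                          Python only has run-time reflection, but the “drive reflection from the reflected object” idea can be sketched there too (all names invented for illustration): the object opts in by mixing in a serialisation adaptor, rather than being inspected from outside:

```python
import json

class JSONAdaptor:
    """Serialisation adaptor: reflection is used, but only because the
    object itself chose to mix this adaptor in — an opt-in contract."""
    def to_json(self) -> str:
        return json.dumps(vars(self), sort_keys=True)

class Point(JSONAdaptor):
    def __init__(self, x: int, y: int):
        self.x = x
        self.y = y

# A class that does not inherit the adaptor stays opaque to this use of
# reflection; nothing inspects its private fields from the outside.
print(Point(1, 2).to_json())  # {"x": 1, "y": 2}
```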

                          That brings me to two related things: Most languages today have no concept of an ABI at the source level. One of the reasons C remains popular is that C is not close to the metal, it’s close to the linker. C source-level constructs map very cleanly to things that are exposed as part of your module’s ABI (especially with compiler extensions for things like ELF visibility). It’s relatively easy to tell if a source-level change to a C program will change the public ABI (though only if you can see which structures escape at the boundary). For software engineering, I’m increasingly convinced that ‘public’ and ‘package’ are the only two visibility types that make sense: is this part of my public, guaranteed stable, ABI / API, or not? If not, then I may still want to break abstractions within my package and poke at that field, but I’m going to always rebuild that code at the same time, so I don’t care too much about the ABI and if I break the API then the person breaking it will see and fix build failures. I would love to have a language explicitly understand ABI versions and compatibility adaptors though, and have ABI guarantee violations be build failures for a package. If you release v1.0 and then you rebuild in a way that removes or changes the exposed ABI, you get a compile / link failure if you didn’t bump the version to 2.0 (or something like 2.unreleased).

                      1. 5

                        I think displaying the number of points at all is not a good idea. Just sort by points and be done with it. Perhaps some text descriptions such as “well received” or “poorly received” might be okay to get some feedback, but other than that I only see downsides and no real positives.

                        1. 3

                          I hadn’t thought about this before but as soon as I read it I realized I think this is a very good idea. It would be a relief to have a gathering place on the internet where nothing is quantified. Points could still be used to counter content marketers etc, but in an invisible fashion.

                          1. 1

                            Imageboards aren’t directly quantified, but you still have a number of responses that says something about the post. This became a particular problem when the number of responses to any post was explicitly enumerated, because it incentivises being provocative and soliciting the greatest reaction. The results weren’t that pretty.

                        1. 3

                          I was surprised Sinatra did so poorly on the benchmark. Does anyone here know why?

                          1. 1

                            Was thinking the same! Hoping someone has some pointers, I’ll probably do some digging later to find out why

                          1. 1

                            SQL + PL/pgSQL for core business logic, combined with PL/Python for things that get too unwieldy otherwise and for writing tests.

                            For adjacent services, Python if I need that huge ecosystem, otherwise I’d like to experiment with Haskell or F#. Or Julia.

                            For more complex logic that is hard to make composable in imperative or functional languages, I go for Answer Set Programming (clingo/clasp).

                            And for web frontend: Elm!

                            1. 11

                              Something about that spammer-spammer conversation makes my eyes hurt.

                              I couldn’t possibly violate an NDA, not about spam. But I could, in the most general terms, describe a spam filter I once implemented for a different site, a very successful filter. This spam filter did not deal with spam: It recognised some kinds of postings as not spam according to site-specific rules, and asked a human for all other postings.

                              The key to its success was that

                              • the site-specific rules were actually site-specific and not even nearly true in general. They were only true for the kinds of things talked about on that site. Translating to lobste.rs, “postings by @calvin are okay” might be a candidate, since it’s true within the context of lobste.rs, but does not apply to other calvins at other sites.
                              • spammers would have to really get to know the site’s audience to learn the filter’s rules.
                              • almost all postings were handled by the rules, so the humans weren’t worn out.
                              • spam was taken down anywhere from a minute to three hours after being posted, not at once. Taking down spam immediately made the spammers adapt.
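                              In rough Python (all names and rules invented for illustration — not the NDA-covered filter), the architecture is something like: recognise the site-specific “definitely fine” cases, queue everything else for a human, and randomise the takedown delay so spammers can’t learn from immediate rejections:

```python
import random

# Site-specific allowlist rules: true only in this site's context,
# deliberately NOT true in general (hypothetical examples).
RULES = [
    lambda post: post["author"] in {"calvin"},          # trusted regulars
    lambda post: "combinator" in post["text"].lower(),  # on-topic jargon
]

human_queue = []

def handle(post):
    """Publish posts a rule recognises as not-spam; queue everything
    else for a human. The filter never decides something IS spam."""
    if any(rule(post) for rule in RULES):
        return "published"
    human_queue.append(post)
    return "pending review"

def takedown_delay_seconds():
    # Delay takedowns by one minute to three hours so spammers can't
    # observe an immediate rejection and adapt to it.
    return random.randint(60, 3 * 3600)

print(handle({"author": "calvin", "text": "hi"}))       # published
print(handle({"author": "newbie", "text": "buy now"}))  # pending review
```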
                              1. 2

                                That last point seems like a low-effort, high-power action! (The rest of your description is also interesting.)

                              1. 25

                                I was reminded of something I lost in an edit, so I painted with a broader brush than I meant to. I don’t think all content marketing is awful and tried to caveat that in my opening paragraph. I don’t have a narrower term, though - maybe someone would like to paint this “know it when I see it” bikeshed. I wish I had a non-pejorative phrase to draw a line around the stuff we don’t want that was so clear that even its authors would accept the label.

                                We do get wonderful stuff that exists to promote businesses. Maybe my two favorite examples are jvns.ca who sells zines and hillelwayne.com who sells a book and corporate training. In chatting about this I have joked about individually naming them as exceptions to any rule, but I’m sure we can do better than that.

                                1. 16

                                  In chatting about this I have joked about individually naming them as exceptions to any rule, but I’m sure we can do better than that.

                                  While I’d appreciate being able to still share my stuff, I don’t want to be special-cased as an exception. If whatever we decide ends up preventing me from submitting, then it’d be unfair for me to sidestep that.

                                  (I know you mean it jokingly, but I’d still like to be on the record as saying this :) )

                                  1. 12

                                    But the goal is never “enforcement of rule X”; it’s “stop content-marketing dbags”, which everybody in here would agree you aren’t, whatever definition of that we carry around in our heads.

                                    1. 1

                                      There’s a tipping point somewhere where content, like yours Hillel, which is clearly written both to inform others & as an exercise in personal marketing becomes far too much of the latter & not enough of the former.

                                      Content which is interesting and informative to Lobsters readers should always be welcome here I think, whether it benefits a specific individual to have it posted or not. What we don’t want is an avalanche of low-value blogspam which contains no content that couldn’t be found in a higher quality form elsewhere & only exists because someone wrote it to market a product. (That product often being themselves.)

                                      1. 4

                                        Ideally we’d want to have zero false alarms or missed signal, but any sort of rule-based solution will either let in some marketers or block some legit posts. So far we’ve preferred missed signals, but are thinking of moving more away from that. I just think an exception-based filter is unfair as a way of avoiding false alarms here.

                                        1. 1

                                          That’s fair.

                                    2. 6

                                      The first person I thought about when I read the OP was @aphyr. I don’t know that he posts here to promote a business, but according to the criteria discussed here, his submission history looks like a surefire way to get banned. (https://lobste.rs/newest/aphyr) And yet, aphyr is the furthest you could get from a content marketer or spammer! A large percentage of the posts I enjoy the most would fall under the same pattern: people working hard at interesting stuff and posting about it regularly. It seems to me there is probably a way to solve this (very real) problem without putting human moderators in the loop. The actual value of the material linked to the community should be paramount, surely?

                                      1. 3

                                        The way I would go about it is a vetting process: if people are going to bump up against that limit, they get promoted for mod review and can then be whitelisted. We have quite a few domains that are primarily self-posted.

                                      1. 5

                                        Hey, I’m Garren, the speaker in this talk and on the CouchDB PMC. I hope you found the talk interesting. If you have any questions or are interested in getting involved leave a comment below or sign up to the couchdb mailing list - https://couchdb.apache.org/

                                        1. 1

                                          Haven’t watched the talk yet but I’m a CouchDB fan (used it pretty extensively a few years ago) and was also very intrigued with FoundationDB before it was bought by Apple so what can I say - interesting times ahead! I’ll be keeping a close eye on the project.

                                          1. 1

                                            Awesome. That is great to hear. I’ve been really impressed with FDB. It does have a bit of a learning curve initially since it is in some sense quite low level. But once I understood how the transactions work, it has been great to work with.

                                        1. 2

                                          What we really need are better databases.

                                          1. 3

                                             I feel that databases have already stepped up their game, but somehow people are not up to date with all the improvements. A lot of developers I meet have no clue how to optimize a database and generally treat it as a black box. A lot of companies would rather hire someone with ReactJS experience than DBA experience :)

                                            1. 2

                                              I obviously have no idea what you have in mind but I agree and am intrigued. (Even so, it’s interesting and instructive to see how the whole noSQL cycle went down.)

                                              1. 1

                                                What I mean is that I have spent some time with PostgreSQL’s views, triggers and row-level-security to glimpse a future where a lot of business logic gets encoded in a non-imperative way very close to the data. We are not there yet, though.

                                                 It would be nice to be able to store the schema in a git repository and be able to statically check that all your views and procedures are compatible with each other. It would also be nice to have a tool to construct the best migration path from the current schema to the new one where you only instruct it on the missing bits (that is, how the shape of the data changed).

                                                I think that a tight type system and some good tooling might be able to combat the complexity much better than service oriented architecture that still needs a lot of attention on coordination and API stability. If a team changed their public views, they should immediately get a type error or a QuickCheck test suite should notify them that they broke something. They could share ownership and modify dependent code themselves more easily.

                                                1. 2

                                                  This is indeed the technical platform I introduced at my last job and am using for my current project!

                                                  It would be nice to be able to store the schema in a git repository

                                                  I’m using the excellent setup pioneered (?) by the PostgREST/Subzero project:

                                                  https://github.com/subzerocloud/subzero-cli

                                                   It’s very simple actually: build up your schema with idempotent SQL scripts split up into regular text files according to your taste (you can place them in a hierarchical file structure that fits your software model). Just use \ir path/to/script.sql to run all the scripts in order from a top init.sql file. For example, from init.sql, call one script to set up db users, another to create schemas and set basic permissions on them, then call one script for each schema which in turn calls sub-scripts to set up tables, views, functions… All of this happens in regular text files, under version control. Reloading the entire db structure + seed data takes about a second, so you can iterate quickly.

                                                  Now, the great thing that subzero-cli gives you is a way to turn the resulting schema into a migration (using the Sqitch stand-alone migration tool) by automatically diffing your current schema against the last checked in schema. (This involves a little dance of starting up ephemeral Docker containers and running a diffing tool, but you don’t really notice.) So you get a standard way of deploying this to your production system using simple migrations. (Sqitch is a pretty great tool in itself.)

                                                  be able to statically check that all your views and procedures are compatible with each other

                                                  Here you’ll have to rely on automated tests, like pgTAP or anything really that you prefer. Python is very well supported as an “in-database” language by Postgres and I’m working on writing tests using the wonderful Hypothesis library and run them directly inside Postgres to thoroughly test functions, views etc.
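                                                  For the pgTAP route, a minimal test script might look like this (the schema, view, and function names are hypothetical):

                                                  ```sql
                                                  -- Requires: CREATE EXTENSION pgtap;
                                                  -- Run with pg_prove, or just psql -f test_app.sql
                                                  BEGIN;
                                                  SELECT plan(2);

                                                  -- structural check: the view the clients depend on still exists
                                                  SELECT has_view('app', 'active_users', 'app.active_users exists');

                                                  -- behavioural check: a function still returns what we expect
                                                  SELECT is(app.add_one(41), 42, 'app.add_one increments its argument');

                                                  SELECT * FROM finish();
                                                  ROLLBACK;
                                                  ```

                                                  Wrapping the run in BEGIN/ROLLBACK keeps the tests from leaving any state behind in the database.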

                                                  It would also be nice to have a tool to construct the best migration path from the current schema to the new one

                                                  Again, handled very well by subzero-cli, relying on apgdiff (apgdiff.com, yes it’s “old” but subzero maintain their own fork which gets small tweaks from what I’ve seen).

                                                  I obviously agree with the rest of what you wrote :) If you put PostgREST, PostGraphile, or Hasura on top of your “smart” postgres system, you can give teams quite a bit of flexibility and autonomy in structuring client-server communication for their use cases, while keeping the core logic locked down in a base schema.

                                                  1. 1

                                                    It would be nice to be able to store the schema in a git repository and be able to statically check that all your views and procedures are compatible with each other. It would also be nice to have a tool to construct the best migration path from the current schema to the new one where you only instruct it on the missing bits (that is, how the shape of the data changed).

                                                    Unless I misunderstand you, these tools already exist, at least for MySQL, PostgreSQL, and MSSQL. The compatibility checking does need to happen by deploying the schema, but the rest is there now.

                                                    1. 1

                                                      The compatibility checking does need to happen by deploying the schema, but the rest is there now.

                                                      I am pretty sure that checking of procedure bodies only happens when you run them.

                                                      Can you share links for the tools? I am not aware of them.

                                                      1. 2

                                                        Yeah, stored procs are not statically analyzed in any of the tools I know.

                                                        SQL Server: https://docs.microsoft.com/en-us/sql/relational-databases/data-tier-applications/deploy-a-database-by-using-a-dac?view=sql-server-ver15

                                                        MySQL: https://www.skeema.io/

                                                        For Postgres I know I’ve seen one or two tools that functioned in this way, but I don’t seem to have saved the link.

                                              1. 23

                                                The guideline I heard was “don’t think about microservices until you have more devops people than the average startup has employees.”

                                                Monoliths scale pretty dang far!

                                                1. 9

                                                  At my last job I often said my primary mission was to defer for the longest time possible the introduction of microservices and to defer for the longest time possible the introduction of deep learning.

                                                  1. 6

                                                    This lines up with my experience as well: microservice scaling is entirely organizational, not technological.

                                                    Eventually consistent writes, stale OLAP, and designing for the inevitable failure of half your application become paradise when the alternative is that even small feature delivery grinds to a halt because coordinating and testing changes eats up most of the cycles of your development teams.

                                                    1. 1

                                                      I agree that having devops figured out is critical. But even in startups, when done right, it’s nice to split different things up into different services.

                                                    1. 13

                                                      In my experience the highest velocity architecture is one monolith per team.

                                                      1. 5

                                                        Microservices should implement bounded contexts, which map to teams, so, yes: one microservice (monolith, service, whatever) per team is exactly right.

                                                        1. 9

                                                          … until that team gets reorged.

                                                          1. 6

                                                            This is a mood beyond words

                                                            1. 4

                                                              A microservice architecture is indeed a poor fit for an organization that’s still figuring out how to hew the borders of its bounded contexts.

                                                            2. 3

                                                              Something that seems to have gotten lost is that there was a reason to tack the “micro” onto the long-established term “service oriented”. Micro was supposed to be very different. A lot of talk about microservices nowadays seems to just mean “not-monolith”. I’m not generally very keen on microservices, but a lot of confusion seems to stem from mixing ideas from different approaches without understanding their strengths and weaknesses. My current advice is to 1) achieve strong modularization within your monolith (so it becomes clear which modules could be turned into stand-alone services if need be), 2) extract a few services when there is a clear benefit, 3) forget about microservices until you have no other options.

                                                            3. 2

                                                              Not only highest velocity, but also lowest fragility, since you essentially allocate responsibility to a team, which is unlikely to quit all at once. It also lowers communication overhead to some extent, since you reliably have team leads + PMs + managers communicating without having to get everyone involved in the communication chain.

                                                            1. 2
                                                              • Banging my head against elm-graphql + PostGraphile
                                                              • Reading A Common-Sense Guide to Data Structures and Algorithms and Domain Modeling Made Functional
                                                              • Also slowly reading Data Model Patterns: Conventions of Thought, 90s book by David Hay
                                                              • Hopefully making some headway on a side project in the weekend
                                                              • Date night with my [common-law] wife tomorrow, rare evening without kids!
                                                              1. 3

                                                                If you want to go straight to the draft paper itself: https://arxiv.org/pdf/1911.02564.pdf

                                                                Abstract

                                                                A major determinant of the quality of software systems is the quality of their requirements, which should be both understandable and precise. Natural language, the most commonly used for writing requirements, helps understandability, but lacks precision.

                                                                To achieve precision, researchers have for many years advocated the use of “formal” approaches to writing requirements. These efforts have produced many requirements methods and notations, which vary considerably in their style, scope and applicability. The present survey discusses some of the principal approaches.

                                                                The analysis uses a number of complementary criteria, such as traceability support, level of abstraction and tool support. It classifies the surveyed techniques into five categories: general-purpose, natural-language-based, graph and automata, other mathematical notations, and programming-language-based. The review includes examples from all of these categories, altogether 22 different methods, including for example SysML, Relax, Petri Nets, VDM, Eiffel, Event-B, Alloy.

                                                                The review discusses a number of important open questions, including the role of tools and education and how to make industrial applications benefit more from the contributions of formal approaches.

                                                                1. 2

                                                                  If you create a bunch of types in TypeScript and then try to have the typechecker catch all possible error states, so that code that could produce a runtime error wouldn’t compile (though a functional bug could still happen), isn’t this Elm? You model out your problem and you lean heavily on the types of that model. Then you write functional/business-y tests?

                                                                  I’m not declaring this, I’m more like asking. :) I’ve done way more typescript than Elm but not years and years of either.

                                                                  1. 3

                                                                    This is indeed the idea behind type systems like Elm’s, and Elm takes the idea a bit further than comparable languages by being very restrictive about sources of non-determinism. So if you try to utilize types to their fullest, then I’d say you’re doing things in the spirit of Elm, but you won’t be able to get the same guarantees from a language that wasn’t designed from the ground up to make strong guarantees the way Elm was. And in any case, having a strong type checker doesn’t obviate the need for unit tests (or unit-level tests, whether they are classical example-based tests or not), although you’ll probably not need to write as many, since there are fewer degrees of freedom in the interactions between the units/modules in your system.

                                                                    1. 3

                                                                      isn’t this Elm?

                                                                      Hi! I’m the blog post author and yes this concept is heavily encouraged in Elm! I have a few other blog posts that go into more detail if you’re interested to learn more. With* Functions, Phantom Types, The Never Type, and Opaque Types are all related to being deliberate about modeling data and defining function / module interfaces.

                                                                      Also, Richard Feldman gave a good talk a few years ago you may find interesting - Making Impossible States Impossible

                                                                      I hope that helps!

                                                                    1. 3

                                                                      Funny… I have that book! I’ve skimmed through it a few times, and my impression of the language is that it’s very similar to Erlang, but with more Prolog.

                                                                      I figured the language was dead, but I’ll see if I can get this running.

                                                                      1. 3

                                                                        It should be straightforward to build; running multiple communicating nodes can be a bit of a hassle, though. I recommend giving the “strand” rc(1) script in strand-utils-1.tar.gz a try, as it makes this much easier. Any questions and suggestions are more than welcome!

                                                                        1. 1

                                                                          Back in the 90s when I first saw Erlang, I always thought it had some “surface similarity” with Prolog.

                                                                          1. 2

                                                                            The early versions of Erlang were indeed implemented on top of Prolog, and Joe Armstrong talked about his fascination with Prolog in more recent years as well. As I remember it, he said they gradually removed all backtracking/indeterminism until they realized they weren’t really doing Prolog programming anymore. Some of the syntax that looks very idiosyncratic in Erlang is carried over from Prolog.

                                                                            Robert Virding (another co-creator of Erlang) has created a “Prolog for Erlang”[0].

                                                                            Term unification in Erlang and Elixir comes straight from Prolog; it’s a lot more restricted without backtracking, but still very nice! And Joe described Elixir’s beloved pipe operator like this[1]:

                                                                            This is the recessive monadic gene of Prolog. The gene was dominant in Prolog, recessive in Erlang (son-of-prolog) but re-emerged in Elixir (son-of-son-of-prolog).

                                                                            (He’s referring to DCGs in Prolog.)

                                                                            [0] https://github.com/rvirding/erlog

                                                                            [1] https://joearms.github.io/published/2013-05-31-a-week-with-elixir.html