1. 1

    How big is your production environment? Is it realistic and affordable to run a regularly updated “exact copy”?

    1. 1

      The big question: why not port something else like Pony or Cloud Haskell? Or why not add static types to something like Erlang itself?

      1. 14

        I’m not sure what you mean about Pony or Cloud Haskell, but I have some answers to why not Erlang.

        Statically typing Erlang is a different and considerably more difficult problem than making a compatible, soundly typed language. Realistically, for an Erlang type checker to get adoption, it needs to be flexible enough to be applied incrementally to existing Erlang codebases, which means gradual typing. Gradual typing offers a very different set of guarantees to those that Gleam offers. By default it’s unsafe, which isn’t what I wanted.

        There are also some more human issues, such as my not being part of Ericsson, so I would have little to no ability to use or adapt the existing compiler; I would have to attempt to be compatible with it. Even if we were successful in that, the add-on nature means it becomes more of a battle to get it adopted in Erlang projects, something that I would say the officially supported Dialyzer has struggled to do.

        Overall I don’t believe it would be an achievable goal for me to add static types to Erlang, but making a new language is very achievable. It also gives us an opportunity to do things differently from Erlang. I love Erlang, but it certainly has some bits that I like less.

        1. 10

          I’m not sure what the point of Pony on BEAM would be. Pony’s reference capabilities statically prove that “unsafe things” are safe to do. BEAM wouldn’t allow you to take advantage of that. Simplest example: message passing would result in data copying, which doesn’t happen in the Pony runtime. You’d be better off using one of the languages that already runs on BEAM, including ones with static typing.

          1. 4

            Thanks for Pony and thanks for the reply!

            As you might imagine, I’m very interested in how Pony types actors. I skimmed the documentation, but many aspects, such as supervision trees, were unclear to me (likely my fault). Are there any materials on structuring Pony applications you could recommend? Thank you :)

            1. 5

              There are no supervision trees at this time in Pony. How you would go about doing that is an interesting open question.

              Because Pony is statically typed and you can’t have null references, the way you send a message to an actor is by holding a “tag” reference to that actor and calling behaviors on it. This has several advantages but, if you were to try to do Erlang-style “let it crash”, you run into difficulties.

              What does it mean to “let it crash”? You can’t invalidate the references that actors hold to this actor. You could perhaps “reset” the actors in some fashion but there’s nothing built into the Pony distribution to do that, nor are the mechanics something anyone has solid ideas about.
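              (If it helps to make that concrete, here’s a rough analogy in Haskell rather than Pony; ActorRef, spawn, and send are illustrative names of my own, with a channel standing in for a mailbox. The point is that a holder of the typed reference can only send messages through it, and nothing can invalidate the references other actors already hold.)

                  import Control.Concurrent (forkIO, threadDelay)
                  import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
                  import Control.Monad (forever)

                  -- A typed, never-null handle to an actor: all you can do is send to it.
                  newtype ActorRef msg = ActorRef (Chan msg)

                  -- Spawn an actor that runs a handler over its mailbox forever.
                  spawn :: (msg -> IO ()) -> IO (ActorRef msg)
                  spawn handler = do
                    mailbox <- newChan
                    _ <- forkIO (forever (readChan mailbox >>= handler))
                    pure (ActorRef mailbox)

                  -- "Calling a behavior": sending a message through the reference.
                  send :: ActorRef msg -> msg -> IO ()
                  send (ActorRef mailbox) = writeChan mailbox

                  main :: IO ()
                  main = do
                    logger <- spawn (putStrLn . ("got: " ++))
                    send logger "hello"
                    -- Even if the handler crashed, this ActorRef would still be
                    -- valid: there is no way to invalidate it. That is the
                    -- "let it crash" difficulty described above.
                    send logger "world"
                    threadDelay 100000 -- give the actor time to drain its mailbox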

              There aren’t a lot of materials on structuring Pony applications I can recommend at this time. I’d suggest stopping by our Zulip and starting up a conversation in the beginner help stream: https://ponylang.zulipchat.com/#narrow/stream/189985-beginner-help

              1. 2

                Thank you

        1. 4

          AFAIK there’s still the issue with read-only access for public builds, which I’d regard as a blocker for public project usage: https://github.com/buildkite/feedback/issues/137

            1. 3

              I’m really glad you pointed this out, as I’ve been planning on transitioning some of the Pony CI over to Buildkite, and it would have sucked to find out about this after the fact.

              1. 3

                Apparently they’ve now fixed this, as per @j605’s link below. I’ve pointed this out upstream and the ticket has been closed.

            1. 4

              Is there a reason to consider this instead of something like 1Password or Bitwarden? I read the FAQ and it seems like it is the same except tied to Firefox.

              1. 6

                Dunno about their offerings, but what I personally like about Lockbox is that it’s got a super friendly UX (it caters to simplicity of use rather than to us nerds :-)). I already use Firefox Sync on all my devices, and the Android autofill API allows using it for apps too.

                Lockbox is also free of charge, free software and encrypts data on the client, not in the cloud.

                1. 4

                  If I follow correctly, you are saying it’s encrypted at the client and then synced to the cloud. Correct?

                  1. 4

                    Yeah

                2. 0

                  1Password or Bitwarden

                  Those are proprietary, and I would assume the Lockbox thing isn’t?

                  1Password is proprietary, Bitwarden isn’t. Oops.

                  1. 10

                    Bitwarden is open source:

                    https://github.com/bitwarden

                    1. 1

                      […] Bitwarden

                      Those are proprietary, […]

                      Nope.

                      1. 2

                        Well, ok, one of them is then.

                        1. 2

                          Have to say, my experience of Bitwarden has been nothing but positive. I much prefer it to alternatives like LastPass.

                  1. 12
                    1. 1

                      “Us” was really awesome. Y’all should go see it.

                      And I got that release done. So I guess all that is left is exercising and relaxing.

                      Nice.

                    1. 1

                      This is an exciting project, but I am not touching it until GCC is supported:

                      https://github.com/ponylang/ponyc/issues/2079

                      1. 8

                        GCC is supported everywhere but Windows. We’d love it if you wanted to help with getting GCC working on Windows with Pony.

                        1. 3

                          Honestly, I can and would like to help, but I am pretty busy:

                          Http://GitHub.com/cup

                          Maybe in a month or so, if it’s not done by then.

                          1. 1

                            That would be awesome.

                        2. 2

                          Out of curiosity, why?

                          1. 3

                            You can get GCC with about 26 MB worth of downloads; Visual Studio is about two orders of magnitude above that. In addition, I don’t believe the Visual Studio compiler is open source.

                            So until that changes, I will use and stick with projects that use GCC/Clang.

                            1. 2

                              MinGW allows you to cross-compile Windows binaries without running Windows.

                          1. 9

                            As @WilhelmVonWeiner mentioned, I would invest in a copy of The Design and Implementation of the 4.4BSD Operating System or The Design and Implementation of the FreeBSD Operating System.

                            And I’d suggest starting by writing your own simple operating system kernel. Personally, I moved from there to studying Minix (this was many years ago). Minix was (and I hear still is) great for learning from. It’s different from Linux in that it has a microkernel architecture; that said, you will learn a lot from it because it’s easy to read through and understand. Between writing your own and Minix, you’ll be off to a good start.

                            I still find the source for the various BSDs easier to follow than Linux and would suggest making that your next big move; graduate to a BSD, if you will, and from there you could leap to Linux.

                            One other thing to consider: picking a less-used kernel to start hacking on when you feel comfortable might be a good idea if you can find folks in that community to mentor you. In the end, the code base matters less than having people who are grateful for your assistance and want to help you learn. In my experience, smaller communities are more likely to be ones you can find mentors in. That said, your mileage may vary greatly.

                            1. 1

                              Thanks! Any idea why the source for BSDs is easier to follow?

                              1. 2

                                Not a BSD, but Minix 2 was written with readability as nearly the only goal.

                                1. 1

                                  I could take guesses, based on the number of committers and the development process, as to why that is the case, but in the end it would be speculation. I know I’m not alone in this feeling, but I don’t know if I’m in the majority or minority.

                                2. 1

                                  Any pros and cons of starting with the 4.4BSD book vs the FreeBSD book?

                                  1. 1

                                    I haven’t read the 2nd edition of the book so I can’t comment on that. Sorry.

                                1. 27

                                  It’s not about Linux per se, but it does relate to how operating systems work, and covers a similar kernel: The Design and Implementation of the 4.4BSD Operating System. I bought it on the recommendation of John Carmack, and the depth it goes into is great. Every chapter also has little quizzes (without answers), so you can confirm to yourself that you know how the described system components should work.

                                  1. 3

                                    Great book. Multiple pluses on this one.

                                    1. 2

                                      Thanks! I’ll give that book a look.

                                      1. 1

                                        Thanks!

                                      1. 7

                                        The irony is that he’s now trying to build better tools that use embedded DSLs instead of YAML files, but the market is so saturated with YAML that I don’t think the new tools he’s working on have a chance of gaining traction, and that seems to be the major source of the angst in that thread.

                                        One of the analogies I like about the software ecosystem is yeast drowning in the byproducts of their own metabolic processes after converting sugar into alcohol. Computation is a magical substrate, but we keep squandering the magic. The irony is that Michael initially squandered the magic, and in the new and less magical regime his new tools don’t have a home. He contributed to the code-less mess he’s decrying, because Ansible is one of the buggiest and slowest infrastructure management tools I’ve ever used.

                                        I suspect, like all hype cycles, people will figure it out eventually: Ant used to be a thing, and now it is mostly accepted that XML for a build system is a bad idea. Maybe eventually people will figure out that infrastructure as YAML is not sustainable either.

                                        1. 1

                                          What alternative would you propose to DSLs or YAML?

                                          1. 4

                                            There are plenty of alternatives. Pulumi is my current favorite.

                                            1. 3

                                              Thanks for bringing Pulumi onto my radar; I hadn’t heard of it before. It seems quite close to what I’m currently trying to master, Terraform. So I ended up here: https://pulumi.io/reference/vs/terraform.html, where they say:

                                              Terraform, by default, requires that you manage concurrency and state manually, by way of its “state files.” Pulumi, in contrast, uses the free app.pulumi.com service to eliminate these concerns. This makes getting started with Pulumi, and operationalizing it in a team setting, much easier.

                                              Which to me seemed rather dishonest. Terraform’s way seems much more flexible and doesn’t tie me to HashiCorp if I don’t want that. Pulumi seems like a modern SaaS money vacuum: https://www.pulumi.com/pricing/

                                              The positive side, of course, is that doing many programmatic things in Terraform HCL is quite painful, as all non-Turing-complete programming tends to be when you stray from the path the language designers built for you … Pulumi obviously handles that much better.

                                              1. 3

                                                I work at Pulumi. To be 100% clear, you can absolutely manage a state file locally in the same way you can with TF.

                                                The service does have a free tier though, and if you can use it, I think you should, as it is vastly more convenient.

                                                1. 3

                                                  You’re welcome to use a local state file the same way as in Terraform.

                                                2. 1

                                                  +100000

                                            1. 10

                                              In a burndown chart, I want to be able to run simulations as well. “What happens to this project if X work falls behind. What happens if Bob gets sick?”

                                              Is anything in software development predictable enough to make this kind of analysis useful?

                                              1. 8

                                                No, this is basically a management wish-fulfillment fantasy.

                                                Seeing that we thought something would take 8 hours of work but we spent 24 is incredibly valuable. We can revisit the assumptions we made when estimating and see where we got it wrong. Then, we can try to account for it next time. Yes, estimating is hard, but it’s also a skill you can get better at if you work at it and have the support of proper tools.

                                                Likewise, I have heard this asserted by every manager I’ve ever worked with and for. No evidence has ever been presented, nor have estimates actually improved over time. (The usual anti-pattern: estimates get worse over time because every time an estimate turns out to be low, a fix is proposed, like “what if we do all-team estimates? what if we update estimates more frequently?”, which inevitably means spending more time on estimates, which means spending less time on development, which means we get slower and slower.)

                                                1. 7

                                                  I personally used to be quite bad at estimating. I’ve worked at it, and I’ve gotten much better. There are things you can do to get much better at it; none of the things you’ve mentioned are ones I think would help. I plan on writing a post about the things I’ve learned and taught others that have helped make estimates more accurate.

                                                  1. 3

                                                    That would make great reading.

                                                    Are there any existing accounts of effective software schedule estimation you’d recommend?

                                                    1. 1

                                                      Two things I would recommend (and they will be the primary topics of said blog post):

                                                      Estimates slipping is usually about not accurately accounting for risk. Waltzing with Bears is a great book on risk management. The ideas in it might be overkill for many folks, but the process of thinking about how you should account for risk and establishing your own practices is invaluable. The book is great even if you only use it as “that seems overblown, what if I…”.

                                                      The second is to record your estimates and why you made them. What did you know at the time you made your estimate? Then, when your estimate is wrong, examine why. What didn’t you account for? When I first started doing this, I realized that most of my estimates were wrong because I didn’t bother to explore the problem enough and was being tripped up by not accounting for known knowns. Eventually I got better at that and started getting tripped up by known unknowns (that’s risk). I’ve since adopted some techniques for fleshing out risks when I am estimating, and then front-loading work on those risks. If you think something might take a week or it might take a month, work on that first. Dig in to understand the problem so you can better estimate it. Maybe the problem is way harder than you imagine and you shouldn’t be doing the project at all. This isn’t a new concept, but it’s one that is rarely used. In agile methodologies, it’s usually called a “spike”.

                                                      I’ve worked on projects that spanned months where we spent a couple of weeks on estimation and planning. A big part of that time was spent digging in, understanding the problem better, discussing it, and working out what we needed to explore more to really understand the project, so we could course-correct as we went along.

                                                      1. 1

                                                        Ooh, a DeMarco book! Will definitely check it out. Thanks!

                                                    2. 1

                                                      Please do write this, I need to improve in this area.

                                                    3. 1

                                                      The wish-fulfilment does not exist in a vacuum.

                                                      Your customers might not be happy with your team constantly running late. Your pre-revenue startup might have a hard time raising investment. Whatever. There are external reasons why a professional developer must be reliable in his or her estimates to actually get things out the door.

                                                      I’ve been changing my opinion on this back and forth. Especially in a pïss-poor startup, where the biz guys wanted us to skip unit testing to achieve results faster, refusing to estimate was a fuck you. The code base got convoluted, but dealing with how they represented things was also frustrating.

                                                      I feel that in those cases the problem runs deeper, in how geeks are supposed to be managed. Hell, it could be that estimation starts eating up time because the managers drove the geeks into protesting, which is, of course, as unprofessional as delivering late. Still, you need to sort out your org for smooth ops before taking care of estimates.

                                                      Yet this isn’t car repair for old vehicles, where you find more and more problems as you go along, making estimates tough without thorough diagnostics, but the customer is happy with the substitute car you gave out in the meantime.

                                                      1. 0

                                                        The fact that customers want something, or that it is necessary to the business’s success, does not cause it to become possible. I’m not disputing the desirability of accurate estimates. I’m disputing the idea that they are possible. I have not seen any team or technique generate such estimates reliably, over time, in various circumstances. (Of course, like any gambler with a “system,” sometimes people making estimates get lucky and they turn out to be right by coincidence.) I have heard many managers claim to have a system for reliable estimates which worked in some previous job; none was able to replicate that success on the teams I observed directly.

                                                        (It’s not just software, either. Many people point to the building trades as an example of successful estimation and scheduling. Judging by my experience maintaining and restoring an old house, and the experiences of friends and acquaintances who’ve undertaken more ambitious restorations, this is mostly wishful thinking. It’s common for estimates by restoration contractors on larger jobs to be off by months or years, and by vast amounts of money. If so mature an industry can’t manage reliable scheduling, what hope is there for us?)

                                                        Yet this isn’t car repair for old vehicles, where you find more and more problems as you go along, making estimates tough without thorough diagnostics, but the customer is happy with the substitute car you gave out in the meantime.

                                                        I’d argue that is exactly what much software development is like (except there is no substitute for the customer).

                                                        1. 1

                                                          Maybe I’d like to be more optimistic about learning to estimate better ;) But for sure @SeanTAllen touched on a lot of pertinent points. Is it Alice or Bob who gets the task? How well is the problem space, the code, known? And so on.

                                                          It’s hard as balls, and you’re not wrong with your gambler analogy, but not all systems for getting things right occasionally are equally unlikely to succeed. Renovators also learn what to look out for, how long those issues tend to take, and how they interact. Probabilities are usually OK with customers if that’s all you’ve got.

                                                          In my car analogy, the point kinda was that we’re screwed because we can’t give out substitutes. We can deliver temporary hacks, although nothing is as permanent as a temporary hack.

                                                    4. 1

                                                      A lot of activities in software development try to improve predictability. For example: following style guidelines, continuous integration, unit testing, etc. All of these have a cost and slow developers down. The upside, of course, is reducing bugs, which would slow you down much more later. Or maybe not. The risk is generally too high, so we prefer the predictability of a slow, incremental process.

                                                      I have a feeling that Deming thought about this when he talked about “variation”, but I need to read more from the father of Lean and grandfather of Agile to understand it. Currently, I believe that I don’t assign quite the correct meaning to the words I read from him.

                                                    1. 3

                                                      I am powerfully tempted to repartition one of my drives and give this a shot.

                                                      1. 5

                                                        Do it! :)

                                                        1. 1

                                                          I think you should

                                                        1. 13

                                                          Rich has been railing on types for the last few keynotes, but it looks to me like he’s only tried Haskell and Kotlin, and that he hasn’t used them a whole lot, because some of his complaints look like complete strawmen if you have a good understanding of, and experience with, a type system as sophisticated as Haskell’s, and others are better addressed in languages with different type systems than Haskell’s, such as TypeScript.

                                                          I think he makes lots of good points, I’m just puzzled as to why he’s seemingly ignoring a lot of research in type theory while designing his own type system (clojure.spec), and if he’s not, why he thinks other models don’t work either.

                                                          1. 14

                                                            One nit: spec is a contract system, not a type system. The former is often used to patch up a lack of the latter, but it’s a distinct concept you can do very different things with.

                                                            EDIT: to see how they can diverge, you’re probably better off looking at what Racket does than what Clojure does. Racket is the main “research language” for contracts and does some pretty fun stuff with them.

                                                            1. 4

                                                              It’s all fuzzy to me. They’re both formal specifications, and they overlap in a lot of ways. Many types people describe could be pre-/post-conditions and invariants in contract form for specific data or functions on them. And a contract system extended to handle all kinds of things past Booleans will use enough logic to be able to do what advanced type systems do.

                                                              Short of Pierce or someone formally defining it, I don’t know, as a formal-methods non-expert, that contract and type systems in their general forms are fundamentally that different, since they’re used the same way in a lot of cases. Interchangeably, it would appear, if each uses equally powerful and/or automated logics.

                                                              1. 13

                                                                It’s fuzzy but there are differences in practice. I’m going to assume we’re using non-FM-level type systems, so no refinement types or dependent types for full proofs, because once you get there all of our intuition about types and contracts breaks down. Also, I’m coming from a contract background, not a type background. So take everything I say about type systems with a grain of salt.

                                                                In general, static types verify a program’s structure, while contracts verify its properties. Like, super roughly, static types are whether a program is sense or nonsense, while contracts are whether it’s correct or incorrect. Consider how we normally think of tail in Haskell vs, like, Dafny:

                                                                tail :: [a] -> [a]

                                                                method tail<T>(s: seq<T>) returns (o: seq<T>)
                                                                  requires |s| > 0
                                                                  ensures [s[0]] + o == s

                                                                The tradeoff is that verifying structure automatically is a lot easier than verifying semantics. That’s why historically static typing has been compile-time while contracts have been runtime. Often advances in typechecking subsumed use cases for contracts. See, for example, how Eiffel used contracts to ensure “void-free programming” (no nulls), which is subsumed by optionals. However, there are still a lot of places where they don’t overlap, such as in loop invariants, separation logic, (possibly existential contracts?), arguably smart-fuzzing, etc.
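                                                                (To make the compile-time vs. runtime split concrete, here’s a minimal Haskell sketch of my own, not taken from any of these systems: tail’s precondition as a runtime contract, next to the Eiffel-style void check subsumed by an optional type.)

                                                                    -- tail's precondition as a runtime contract: checked on every
                                                                    -- call, and a violation only surfaces when a bad call happens.
                                                                    tailContract :: [a] -> [a]
                                                                    tailContract xs
                                                                      | null xs   = error "contract violation: tail requires a non-empty list"
                                                                      | otherwise = tail xs

                                                                    -- The "void-free" contract subsumed by a static optional type:
                                                                    -- the compiler forces every caller to handle the empty case.
                                                                    headMay :: [a] -> Maybe a
                                                                    headMay []      = Nothing
                                                                    headMay (x : _) = Just x

                                                                    main :: IO ()
                                                                    main = do
                                                                      print (headMay ([] :: [Int]))         -- Nothing; no crash possible
                                                                      print (tailContract [1, 2, 3 :: Int]) -- [2,3]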

                                                                Another overlap is refinement types, but I’d argue that refinement types are “types that act like contracts” versus contracts being “runtime refinement types”, as most successful uses of refinement types came out of research in contracts (like SPARK) and/or are more ‘contracty’ in their formulations.

                                                                1. 3

                                                                  Is there anything contracts do that dependent types cannot?

                                                                  1. 2

                                                                    Fundamentally? Not really, nor vice versa. Both let you say arbitrary things about a function.

                                                                    In practice contracts are more popular for industrial work because they so far seem to map better to imperative languages than dependent types do.

                                                                    1. 1

                                                                      That makes sense, thanks! I’ve never heard of them. I mean, I’ve probably seen people throw the concept around, but I never took it for an actual thing.

                                                              2. 1

                                                                  I see the distinction when we talk about pure values, sum and product types. I wonder if the IO monad, for example, isn’t kind of more on the contract side of things. Sure, it works as a type, and type inference algorithms work with it, but the side-effect thing makes it seem more like a pattern.

                                                              3. 18

                                                                I’m just puzzled as to why he’s seemingly ignoring a lot of research in type theory

                                                                Isn’t that his thing? He’s made proud statements about his disinterest in theory. And it shows. His jubilation about transducers overlooked that they are just a less generic form of ad-hoc polymorphism, invented to abstract over operations on collections.
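                                                                (For what it’s worth, here’s a minimal sketch of that observation in Haskell rather than Clojure; Reducer, mapping, and filtering are my own illustrative names, not an official API.)

                                                                    -- A reducer is a step function for a left fold; a "transducer"
                                                                    -- is then just a reducer transformer, independent of any
                                                                    -- particular collection type.
                                                                    type Reducer a r = r -> a -> r

                                                                    mapping :: (a -> b) -> Reducer b r -> Reducer a r
                                                                    mapping f step acc a = step acc (f a)

                                                                    filtering :: (a -> Bool) -> Reducer a r -> Reducer a r
                                                                    filtering p step acc a = if p a then step acc a else acc

                                                                    main :: IO ()
                                                                    main =
                                                                      -- Compose with plain (.), run with a plain foldl: keep the
                                                                      -- evens, double them, build no intermediate lists. Prints [4,8].
                                                                      print (reverse (foldl ((filtering even . mapping (* 2)) (flip (:))) [] [1 .. 5 :: Int]))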

                                                                1. 1

                                                                  Wow, thanks for that; I never really saw it that way, but it totally makes sense. I’m not a regular Clojure user, but I love Lisp, and love the ML family of languages.

                                                                  1. 1

                                                                    So? Theory is useless without a usable and easy implementation.

                                                                  2. 6

                                                                    seemingly ignoring a lot of research in type theory

                                                                  I’ve come to translate this utterance as “it’s not Haskell”. Are there languages that have been hurt by ignoring type theory research? Some (Go, for instance) have clearly benefited from ignoring it.

                                                                    1. 12

                                                                    I don’t think Rich is nearly as ignorant of Haskell’s type system as everyone seems to think. You can understand this stuff and not find it valuable, and it seems pretty clear to me that this is the case. He’s obviously a skilled programmer whose perspective warrants real consideration; people who are enamored with type systems shouldn’t be quick to write him off even if they disagree.

                                                                      I don’t like dynamic languages fwiw.

                                                                      1. 3

                                                                      I don’t think we can assume anything about what he knows. Even Haskellers here are always learning about its type system or new uses. He spends most of his time in a LISP. It’s safe to assume he knows more LISP benefits than Haskell benefits until we see otherwise in the examples he gives.

                                                                      Best thing to do is probably to come up with lots of examples to run by him at various times/places, and see what he says for/against them.

                                                                        1. 9

                                                                        I guess I would want to hear what people think he’s ignorant of, because he clearly knows the basics of the type system: sum types, typeclasses, etc. The Clojure reducers docs mention requiring associative monoids. I would be extremely surprised if he didn’t know what monads were. I don’t know how far he has to go for people to believe he really doesn’t think it’s worthwhile. I heard Edward Kmett say he didn’t think dependent types were worth the overhead, that the power-to-weight ratio simply wasn’t there. I believe the same about Haskell as a whole. I don’t think it’s insane to believe that about most type systems, and I don’t think Hickey’s position stems from ignorance.

                                                                          1. 2

                                                                          Good examples supporting that he might know the stuff. Now we just need more detail to further test the claims on each aspect of language design.

                                                                            1. 2

                                                                              From the discussions I see, it’s pretty clear to me that Rich has a better understanding of static typing and its trade offs than most Haskell fans.

                                                                      2. 11

                                                                        I’d love to hear in a detailed fashion how Go has clearly benefited from “ignoring type theory research”.

                                                                        1. 5

                                                                          Rust dropped GC by following that research. Several languages had race freedom with theirs. A few had contracts or type systems with similar benefits. Go’s developers ignored that to do a simpler, Oberon-2- and C-like language.

                                                                      There were two reasons. dmpk2k already said the first, which Rob Pike has confirmed: it was designed for anyone from any background to pick up easily right after Google hired them. Also, its simplicity and consistency make it easy for them to immediately go to work on codebases they’ve never seen. This fits both Google’s needs and those of companies that want developers to be replaceable cogs.

                                                                      The other is that the three developers had to agree on every feature. One came from C. One liked stuff like Oberon-2. I don’t recall the other. Their consensus was unlikely to be an OCaml, Haskell, Rust, Pony, and so on. It was something closer to what they liked and understood well.

                                                                      If anything, I thought at the time they should’ve done something like Julia, with a mix of productivity features, high C/Python integration, a usable subset people stick to, and macros for just when needed. Much better. I think a Noogler could probably handle a slightly more advanced language than Go. That team wanted otherwise…

                                                                          1. 2

                                                                            I have a hard time with a number of these statements:

                                                                        “Rust dropped GC by following that research”? So did C++ also follow research to “drop GC”? What about C? I’ve seen plenty of type system conversation related to Rust, but nothing that I would attribute directly to “dropping GC”. That seems like a bit of a simplification.

                                                                        Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared? I’ve seen Rob Pike talk about wanting to appeal to C and C++ programmers, but nothing about ignoring type research. I’d be interested in hearing about that being done and what they thought the benefits were.

                                                                            It sounds like you are saying that the benefit is something familiar and approachable. Is that a benefit to the users of a language or to the language itself? Actually I guess that is more like, is the benefit that it made Go approachable and familiar to a broad swath of programmers and that allowed it to gain broad adoption?

                                                                            If yes, is there anything other than anecdotes (which I would tend to believe) to support that assertion?

                                                                            1. 10

                                                                              “That seems like a bit of a simplification.”

                                                                          It was. The topic is enormously complex. It gets worse when you consider I barely knew C++ before I lost my memory. I did learn about memory pools and reference counting from game developers who used C++. I know it keeps getting updated in ways that improve its safety. The folks that understand C++ and Rust keep arguing about how safe C++ is, with hardly any argument over Rust, since its safety model is baked thoroughly into the language rather than being an option in a sea of options. You could say I’m talking about Rust’s ability to be as safe as a GC in most of an app’s code without runtime checks on memory accesses.

                                                                              “Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared?”

                                                                          Like with the Rich Hickey replies, this burden of proof is backwards, asking us to prove a negative. If assessing what people knew or did, we should assume nothing until we see evidence in their actions and/or informed opinions that they did these things. Only then do we believe they did. I start by comparing what I’ve read of Go to Common LISP, the ML’s, Haskell, Ada/SPARK, Racket/OMeta/Rascal on the metaprogramming side, Rust, Julia, Nim, and so on. Go has almost nothing in it compared to these. It looks like a mix of C, Wirth’s stuff, CSP-like old stuff from the 1970’s-1980’s, and maybe some other things. Not much past the 1980’s. I wasn’t the first to notice either. The article gets the point across despite the problems its author apologized for.

                                                                          Now, that’s the hypothesis from observing Go’s features vs other languages. Let’s test it on intent first. What was the goal? Rob Pike tells us here, with Moray Taylor having a nicer interpretation. The quote:

                                                                              The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

                                                                              It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.

                                                                              So, they’re intentionally dumbing the language down as much as they can while making it practically useful. They’re doing this so smart people from many backgrounds can pick it up easily and go right to being productive for their new employer. It’s also gotta be C-like for the same reason.

                                                                          Now, let’s look at its prior inspirations. In the FAQ, they tell you the ancestors: “Go is mostly in the C family (basic syntax), with significant input from the Pascal/Modula/Oberon family (declarations, packages), plus some ideas from languages inspired by Tony Hoare’s CSP, such as Newsqueak and Limbo (concurrency).” They then make an unsubstantiated claim, in that section at least, that it’s a new language across the board to make programming better and more fun. In reality, it seems really close to a C-like version of the Oberon-2 experience one developer (I can’t recall who) wanted to recreate, with concurrency and tooling for aiding large projects. I covered the concurrency angle in another comment. You don’t see a lot of advanced or far-out stuff here: decades-old tech that’s behind current capabilities. LISP’ers, metaprogrammers and REBOL’ers might say behind old tech, too. ;)

                                                                              Now, let’s look at execution of these C, Wirth-like, and specific concurrency ideas into practice. I actually can’t find this part. I did stumble upon its in-depth history of design decisions. The thing I’m missing, if it was correct, is a reference to the claim that the three developers had to agree on each feature. If that’s true, it automatically would hold the language back from advanced stuff.

                                                                          In summary, we have a language designed by people who mostly didn’t use cutting-edge work in type systems, employed nothing of the sort, looked like languages from the 1970’s-1980’s, considered them ancestors, is admittedly dumbed down as much as possible so anyone from any background can use it, and maybe involved consensus from people who didn’t use cutting-edge stuff (or even much cutting-edge work from the 90’s onward). They actually appear to be detractors of a lot of that stuff if we consider the languages they pushed as reflecting their views on what people should use. Meanwhile, the languages I mentioned above used stuff from the 1990’s-2000’s, giving them capabilities Go doesn’t have. I think the evidence weighs strongly in favor of that being because the designers didn’t look at it, were opposed to it for technical and/or industrial reasons, couldn’t reach a consensus, or some combo.

                                                                          That’s what I think of Go’s history for now. People more knowledgeable, feel free to throw out any resources I might be missing. It just looks to be a highly practical, learn/use-quickly, C/Oberon-like language made to improve onboarding and productivity of random developers coming into big companies like Google. Rob Pike even says that was the goal. Seems open and shut to me. I thank the developers of languages like Julia and Nim for believing we were smart enough to learn a more modern language, even if we have to subset them for inexperienced people.

                                                                          2. 4

                                                                            It’s easy for non-LtU programmers to pick up, which happens to be the vast majority.

                                                                            1. 3

                                                                          Sorry, that isn’t detailed. Is there evidence that it’s easy for these programmers to pick up? What does “easy to pick up” mean? To get something to compile? To create error-free programs? “Clearly benefited” is a really loaded term that can mean pretty much anything to anyone. I’m looking for what the stated benefits are for Go. Is the benefit to Go that it is “approachable” and “familiar”?

                                                                          There seems to be an idea in your statement that using any sort of type theory research will inherently make something hard to pick up. I have a hard time accepting that. I would, without evidence, be willing to accept that many type system ideas (like a number of them in Pony) are hard to pick up, but the idea that you have to ignore type theory research to be easy to pick up is hard for me to accept.

                                                                          Could I create a language that ignores type system theory but uses an unfamiliar syntax, and so is not easy to pick up?

                                                                              1. 5

                                                                            I already gave you the quote from Pike saying it was specifically designed for this. As far as the how, I think one of its designers explains it well in those slides. The Guiding Principles section puts simplicity above everything else. Next, a slide says Pascal was a minimalist language designed for teaching non-programmers to code. Oberon was similarly simple. Oberon-2 added methods on records (think simpler OOP). The designer shows Oberon-2 and Go code, saying it’s C’s syntax with Oberon-2’s structure. I’ll add benefits like automatic memory management.

                                                                            Then, the design link said they chose CSP because (a) they understood it well enough to implement it and (b) it was the easiest thing to implement throughout the language. Like Go itself, it was the simplest option rather than the best along many attributes. There were lots of people who picked up SCOOP (super easy, but with overhead), with probably even more picking up Rust’s method grounded in affine types. Pony is itself doing clever stuff using advances in language design. Go would ignore those since (a) the Go designers didn’t know them well from way back when and (b) they would’ve been more work than their intent/budget could take.

                                                                                They’re at least consistent about simplicity for easy implementation and learning. I’ll give them that.

                                                                            2. 3

                                                                          It seems to me that Go was clearly designed to have a well-known, well-understood set of primitives, and that design angle translated into not incorporating anything fundamentally new or adventurous (unlike Pony and its impressive use of object capabilities). It already looked old at birth, but it feels impressively smooth, in the beginning at least.

                                                                              1. 3

                                                                            I find it hard to believe that CSP and goroutines were a “well-understood set of primitives”. Given the lack of usage of CSP as a mainstream concurrency mechanism, I think saying that Go incorporates nothing fundamentally new or adventurous is selling it short.

                                                                                1. 5

                                                                              CSP is one of the oldest ways people modeled concurrency. I think it was built on Hoare’s monitor concept from years before, which Per Brinch Hansen turned into Concurrent Pascal; he built the Solo OS with a mix of it and regular Pascal. It was also typical in high assurance to use something like Z or VDM for specifying the main system, with concurrency done in CSP and/or some temporal logic. Then SPIN became the dominant way to analyze CSP-like stuff automatically, with a lot of industrial use as a formal method. Lots of other tools and formalisms existed, though, under the banner of process algebras.

                                                                              Outside of verification, the introductory text that taught me about high-performance, parallel computing mentioned CSP as one of the basic models of parallel programming. I was experimenting with it in maybe 2000-2001 based on what those HPC/supercomputing texts taught me. It also tied into the Agent-Oriented Programming I was looking into then, given those were also concurrent, sequential processes distributed across machines and networks. A quick DuckDuckGo shows that a summary article on Wikipedia mentions it, too.

                                                                              There were so many courses teaching it, and so many folks using it, that experts in language design and/or concurrency should’ve noticed it a long time ago and tried to improve on it for their languages. Many did, some doing better: Eiffel SCOOP, ML variants like Concurrent ML, Chapel, Clay with Wittie’s extensions, Rust, and Pony are examples. Then you have Go doing something CSP-like (circa 1970’s) in the 2000’s, still getting race conditions and stuff. What did they learn? (shrugs) I don’t know…

                                                                                  1. 11

                                                                                    Nick,

                                                                                I’m going to take the three different threads of conversation we have going and try to pull them all together in this one reply. I want to thank you for the time you put into each answer. So much of what appears on Reddit, HN, and elsewhere is throwaway short stuff that often feels lazy, or like communication wasn’t really the goal. For a long time, I have appreciated your contributions to lobste.rs because there is a thoughtfulness to them and an attempt to convey information and thinking that is often absent in this medium. Your replies earlier today are no exception.


                                                                                    Language is funny.

                                                                                    You have a very different interpretation of the words “well-understood primitives” than I do. Perhaps it has something to do with anchoring when I was writing my response. I would rephrase my statement this way (and I would still be imprecise):

                                                                                While CSP has been around for a long time, I don’t think that, prior to Go, it was a well-known or familiar concurrency model for most programmers. From that, I would say it isn’t “well-understood”. But I’m reading quite a bit, based on context, into what “well-understood” means here. I’m taking it to mean “widely understood by a large body of programmers”.

                                                                                And your response, Nick, actually makes me believe that more. The languages you mention aren’t ones that I would consider familiar or mainstream to most programmers.

                                                                                    Language is fun like that. I could be anchoring myself again. I rarely ask questions on lobste.rs or comment. I decided to on this occasion because I was really curious about a number of things from an earlier statement:

                                                                                    “Go has clearly benefited from “ignoring type theory research”.

                                                                                    Some things that came to mind when I read that and I wondered “what does this mean?”

                                                                                    “clearly benefited”

                                                                                Hmmm, what does benefit mean? Especially in reference to a language. My reading of benefit is that “doing X helped the language designers achieve one or more goals in a way that had acceptable tradeoffs”. However, it was far from clear to me that this is what people meant.

                                                                                    “ignoring type theory research”

                                                                                    ignoring is an interesting term. This could mean many things and I think it has profound implications for the statement. Does ignoring mean ignorance? Does it mean willfully not caring? Or does it mean considered but decided not to use?

                                                                                    I’m familiar with some of the Rob Pike and Go early history comments that you referenced in the other threads. In particular related to the goal of Go being designed for:

                                                                                    The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

                                                                                    It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.

                                                                                I haven’t found anything, though, that shows there was a willful disregard of type theory. I wasn’t attempting to get you to prove a negative; I’m more just curious. Has the Go team ever said something that would fall under the heading of “type system theory, bah, we don’t need it”? Perhaps they have. And if they have, is there anything that shows a benefit from that?

                                                                                    There’s so much that is loaded into those questions though. So, I’m going to make some statements that are possibly open to being misconstrued about what from your responses, I’m hearing.

                                                                                “Benefit” here means “helped make popular”, because Go, on its surface, presents a number of familiar concepts for the programmer to work with. There’s no individual primitive that feels novel or new to most programmers, except perhaps the concurrency model. However, on first approach, that concurrency model is fairly straightforward in what it asks the programmer to grasp when first encountering it. Given Go’s stated goals from the quote above, it allows programmers to feel productive and “build good software”.

                                                                                    Even as I’m writing that though, I start to take issue with a number of the assumptions that are built into the Pike quote. But that is fine. I think most of it comes down to for me what “good software” is and what “simple” is. And those are further loaded words that can radically change the meaning of a comment based on the reader.

                                                                                    So let me try again:

                                                                                    When people say “Go has clearly benefited from “ignoring type theory research” what they are saying is:

                                                                                Go’s level of popularity is based, in part, on it providing a set of ideas that should be mostly familiar to programmers who have some experience with the Algol family of languages, such as C, C++, Python, Ruby, etc. We can further refine that to say that, within the Algol family, we are really talking about languages whose type systems make few if any guarantees (like C). Go put this familiarity forward as its primary goal, and because of that, it is popular.

                                                                                    Would you say that is a reasonable summation?


                                                                                    When I asked:

                                                                                    “Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared?”

I wasn’t asking you to prove a negative; I was very curious whether any such statements existed. I’ve never seen any. I’ve drawn a number of conclusions about Go based mostly on the Rob Pike quote you provided earlier. I was really asking: has everyone else drawn theirs the same way, or do they know things that I don’t?

It sounds like we are both mostly operating on the same set of information. That’s fine; we can draw conclusions from that. But I at least feel good in now saying that both you and I are inferring things from what appears to be a mostly shared set of knowledge, and not that I am ignorant of statements made by Go team members.

I wasn’t looking for proof; I was looking for information that might help clear up my ignorance in the area.

                                                                                    1. 3

                                                                                      I appreciate that you saw I was trying to put effort into it being productive and civil. Those posts took a while. I appreciate your introspective and kind reply, too. Now, let’s see where we’re at with this.

Yeah, it looks like we were using words with different meanings. I was focused on what’s well understood by the PLT folks who design languages and by the people studying parallelism. Rob Pike, at the least, should be in both of those categories, following that research. Most programmers don’t know about it. You’re right that Go could’ve been the first time it went mainstream.

You also made a good point that it’s probably overstating things to say they never considered it. I have good evidence they avoided almost all of it; other designers didn’t. Yet they may have considered it (how much, we don’t know), assessed it against their objectives, and decided against all of it. The simplest approach would be to just ask them in a non-confrontational way. The other possibility is to look at each designer’s other work to see whether it shows any indication they were considering or using such techniques. If such techniques were absent, saying they didn’t consider them in their next work would be reasonable. Another angle would be to look at whether, as with C’s developers, they had a personal preference for simpler or barely-there typing and consistently avoided developments in type systems. Since that’s a lot of work, I’ll leave it at “unknown” for now.

Regarding its popularity, I’ll start by saying I agree that its simple design reusing existing concepts was a huge element of that. It was Wirth’s philosophy to do the same thing for educating programmers; Go adapted that philosophy to the modern situation. Smart move. I think you shouldn’t underestimate the fact that Google backed it, though.

There were a lot of interesting languages over the decades with all kinds of good tradeoffs. The ones with major corporate backing and/or sitting on top of advantageous foundations/ecosystems (e.g., hardware or OS’s) usually became big in a lasting way. That included COBOL on mainframes, C on cheap hardware spreading with UNIX, Java getting there almost entirely through marketing given its technical failures, .NET/C# forced by Microsoft on its huge ecosystem, Apple pushing Swift, and some smaller ones. Notice the language designs here are all across the board in complexity, often more complex than existing languages. The ecosystem drivers, especially marketing or dominant companies, are the consistent thread driving at least these languages’ mass adoption.

Now, mighty Google announces it’s backing a new language for its massive ecosystem. It’s also designed by celebrity researchers/programmers, including one whom many in the C community respect. It might also be a factor in whether developers land a six-figure job. Those are two major pulls plus a minor one, each of which can draw in developers on its own. Two of them, especially employment, will automatically generate a large number of users if people think Google is serious. Both also have ripple effects where other companies copy what the big company is doing so as not to get left behind. That makes the pull even larger.

So, as I think about your question, I have that in the back of my mind. I mean, those effects pull so hard that Google’s language could be a total piece of garbage and still have 50,000-100,000 developers just going for a gold rush. I think the fact that they simplified the design to make it super-easy to learn and to maintain existing code just turbocharges that effect. Yes, I think the design and its designers could lead to a significant community without Google. I’m just leaning toward a major employer with celebrity designers and fanfare causing most of it.

And then those other languages start getting uptake despite advanced features or learning troubles (especially Rust). That shows the Go team could’ve done better on typing with such techniques if they had wanted to and/or known about them. As I said, that’s unknown. Go might be the best they could do given their background, constraints, goals, or whatever. It’s good that at least four different groups made languages pushing programming further into the 90’s and 2000’s instead of just the 70’s to early 80’s. There are at least three creating languages closer to C that are generating a lot of excitement. C++ is also getting updates making it more like Ada. Non-mainstream languages like Ada/SPARK and Pony are still getting uptake, even if smaller.

If anything, the choice of systems-type languages is exploding right now, with something for everyone. The decisions of Go’s language authors aren’t even worth worrying about, since that time can be put into more appropriate tools. I’m still going to point out that Rob Pike quote to people, to show they had very, very specific goals which produced a language design that may or may not be ideal for a given task. It’s good for perspective. I don’t know the designers’ studies or their tradeoffs, and (given the alternatives) they barely matter past personal curiosity and PLT history. That also means I’ll remain too willfully ignorant about it to clear up anyone’s ignorance, at least till I see some submissions with them talking about it. :)

                                                                                      1. 2

                                                                                        Thanks for the time you put into this @nickpsecurity.

                                                                                        1. 1

                                                                                          Sure thing. I appreciate you patiently putting time into helping me be more accurate and fair describing Go designers’ work.

                                                                                          1. 2

                                                                                            And thank you, I have a different perspective on Go now than I did before. Or rather, I have a better understanding of other perspectives.

                                                                          3. 6

                                                                            I don’t see anything of substance in this comment other than “Haskell has a great type system”.

                                                                            I just watched the talk. Rich took a lot of time to explain his thoughts carefully, and I’m convinced by many of his points. I’m not convinced by anything in this comment because there’s barely anything there. What are you referring to specifically?

                                                                            edit: See my perspective here: https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey#c_povjwe

                                                                            1. 3

                                                                              That wasn’t my point at all. I agree with what Rich says about Maybes in this talk, but it’s obvious from his bad Haskell examples that he hasn’t spent enough time with the language to justify criticizing its type system so harshly.

Also, what he said about representing the idea of a car with information that might or might not be there in different parts of a program might be correct for Haskell’s type system. But in languages with structural subtyping (like TypeScript) or row polymorphism (like Ur/Web), you can easily have a function that takes a car record which may be missing some fields, fills some of them in, and returns a record with a few more fields than it received, like Rich described at some point in the talk.

                                                                              I’m interested to see where he’s gonna end up with this, but I don’t think he’s doing himself any favors by ignoring existing research in the same fields he’s thinking about.

                                                                              1. 5

                                                                                But if you say that you need to go to TypeScript to express something, that doesn’t help me as a Haskell user. I don’t start writing a program in a language with one type system and then switch into a language with a different one.

                                                                                Anyway, my point is not to have a debate on types. My point is that I would rather read or watch an opinion backed up by real-world experience.

                                                                                I don’t like the phrase “ignoring existing research”. It sounds too much like “somebody told me this type system was good and I’m repeating it”. Just because someone published a paper on it, doesn’t mean it’s good. Plenty of researchers disagree on types, and admit that there are open problems.

                                                                                There was just one here the other day!

                                                                                https://lobste.rs/s/dldtqq/ast_typing_problem

                                                                                I’ve found that the applicability of types is quite domain-specific. Rich Hickey is very clear about what domains he’s talking about. If someone makes general statements about type systems without qualifying what they’re talking about, then I won’t take them very seriously.

                                                                            2. 4

I don’t have a good understanding of type systems. What is it that Rich misses about Haskell’s Maybe? Does changing the return type of a function from Maybe T to T not mean that you have to change code which uses the return value of that function?

                                                                              1. 24

                                                                                Does changing the return type of a function from Maybe T to T not mean that you have to change code which uses the return value of that function?

It does in a way, but I think people sometimes overestimate the number of changes required. It depends on whether or not you really care about the returned value. Let’s look at a couple of examples:

First, let’s say that we have a function that gets the first element out of a list, so we start out with something like:

getFirstElem :: [a] -> a
getFirstElem = head  -- note: head is partial and crashes on an empty list
                                                                                

                                                                                Now, we’ll write a couple of functions that make use of this function. Afterwards, I’ll change my getFirstElem function to return a Maybe a so you can see when, why, and how these specific functions need to change.

                                                                                First, let’s imagine that I have some list of lists, and I’d like to just return a single list that has the first element; for example I might have something like ["foo","bar","baz"] and I want to get back "fbb". I can do this by calling map over my list of lists with my getFirstElem function:

                                                                                getFirsts :: [[a]] -> [a]
                                                                                getFirsts = map getFirstElem
                                                                                

                                                                                Next, say we wanted to get an idea of how many elements we were removing from our list of lists. For example, in our case of ["foo","bar","baz"] -> "fbb", we’re going from a total of 9 elements down to 3, so we’ve eliminated 6 elements. We can write a function to help us figure out how many elements we’ve dropped pretty easily by looking at the sum of the lengths of the lists in the input lists, and the overall length of the output list.

                                                                                countDropped :: [[a]] -> [b] -> Int
                                                                                countDropped a b =
                                                                                  let a' = sum $ map length a
                                                                                      b' = length b
                                                                                  in a' - b'
                                                                                

                                                                                Finally, we probably want to print out our string, so we’ll use print:

                                                                                printFirsts =
                                                                                  let l = ["foo","bar","baz"]
                                                                                      r = getFirsts l
                                                                                      d = countDropped l r
                                                                                  in print l >> print r >> print d
                                                                                

Later, if we decide to change our program to look at ["foo","","bar","","baz"], we’ll see our program crash! Oh no! The problem is that head doesn’t work on an empty list, so we’d better go and update it. We’ll have it return a Maybe a so that we can capture the case where we actually got an empty list.

import Data.Maybe (listToMaybe)

getFirstElem :: [a] -> Maybe a
getFirstElem = listToMaybe  -- Nothing for an empty list, Just x otherwise
                                                                                

Now we’ve changed our program so that the type system will explicitly tell us whether we tried to take the head of an empty list or not, and it won’t crash if we pass one in. So what refactoring do we have to do to our program?

                                                                                Let’s walk back through our functions one-by-one. Our getFirsts function had the type [[a]] -> [a] and we’ll need to change that to [[a]] -> [Maybe a] now. What about the code?

If we look at the type of map we’ll see that it has the type map :: (c -> d) -> [c] -> [d]. Both versions of getFirstElem fit the shape c -> d: in both cases c ~ [a], while in the first case d ~ a and in the second d ~ Maybe a. In short, we had to fix our type signature, but nothing in the implementation has to change at all.
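
So the updated function keeps exactly the same body; only the signature changes:

getFirsts :: [[a]] -> [Maybe a]
getFirsts = map getFirstElem
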

What about countDropped? Even though our types changed, we don’t have to change anything in countDropped at all! Why? Because countDropped never looks at any values inside of the lists; it only cares about the structure of the lists (in this case, how many elements they have).

Finally, we’ll need to update printFirsts. The type signature here doesn’t need to change, but we might want to change the way that we’re printing out our values. Technically we can print a Maybe value, but we’d end up with something like [Just 'f',Nothing,Just 'b',Nothing,Just 'b'], which isn’t particularly readable. Let’s update it to replace Nothing values with spaces:

import Data.Maybe (fromMaybe)

printFirsts :: IO ()
printFirsts =
  let l = ["foo","","bar","","baz"]
      r = map (fromMaybe ' ') $ getFirsts l
      d = countDropped l r
  in print l >> print r >> print d
                                                                                

In short, from this example you can see that we can refactor our code to change the type, and in most cases the only code that needs to change is code that actually cares about the value we’ve changed. In an untyped language you’d still expect to change the code that cares about the values you’re passing around, so the only additional change we had to make here was a very small update to the type signature (but not the implementation) of one function. In fact, if I’d let the type be inferred (or written a much more general function), I wouldn’t have had to do even that.

There’s an impression that the types in Haskell require you to do a lot of extra work when refactoring, but in practice the changes you make aren’t materially more numerous or different from the ones you’d make in an untyped language; it’s just that the compiler tells you about the changes you need to make, so you don’t have to find them through unit tests or program crashes.

                                                                                1. 3

countDropped should be changed. What to change it to will depend on your specification, but as a simple inspection, countDropped ["", "", "", ""] [Nothing, Nothing, Nothing, Nothing] will return -4, which isn’t likely to be what you want.

                                                                                  1. 3

That’s correct in a manner of speaking, since we’re essentially computing the number of characters in all of the substrings minus the length of the printed items. Since [""] = [[]] but gets printed as " ", we print one extra character (the space) compared to the total length of the input strings, so a negative “dropped” value is sensible.

                                                                                    Of course the entire thing was a completely contrived example I came up with while I was sitting at work trying to get through my morning coffee, and really only served to show “sometimes we don’t need to change the types at all”, so I’m not terribly worried about the semantics of the specification. You’re welcome to propose any other more sensible alternative you’d like.

                                                                                    1. -3

                                                                                      That’s correct in a manner of speaking, since …

                                                                                      This is an impressive contortion, on par with corporate legalese, but your post-hoc justification is undermined by the fact that you didn’t know this was the behavior of your function until I pointed it out.

                                                                                      Of course the entire thing was a completely contrived example …

On this, we can agree. You created a function whose definition would still typecheck after the change, without addressing the changed behavior, and without refuting that, in the general case, Maybe T is not a supertype of T.

                                                                                      You’re welcome to propose any other more sensible alternative you’d like.

Alternative to what, Maybe? The hour-long talk linked here is pretty good. Nullable types are more advantageous, too, like C#’s int?. The point is that if you call a function as f(0) when the function requires its first argument, and later the requirement is “relaxed”, all the places where you wrote f(0) will still work and behave in exactly the same way.

                                                                                      Getting back to the original question, which was (1) “what is it that Rich Hickey doesn’t understand about types?” and, (2) “does changing the return type from Maybe T to T cause calling code to break?”. The answer to (2) is yes. The answer to (1), given (2), is nothing.
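
To make (2) concrete, here’s a minimal Haskell sketch of the breakage; the names here are hypothetical, not from the talk:

import qualified Data.Map as Map

users :: Map.Map Int String
users = Map.fromList [(1, "ada")]

-- Before: the lookup may fail.
lookupUser :: Int -> Maybe String
lookupUser uid = Map.lookup uid users

greet :: Int -> String
greet uid = case lookupUser uid of
  Just name -> "hi " ++ name
  Nothing   -> "who?"

If lookupUser is later changed to return a plain String, the case expression in greet stops typechecking, even though the function now makes a strictly stronger promise.
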

                                                                                      1. 9

                                                                                        I was actually perfectly aware of the behavior, and I didn’t care because it was just a small toy example. I was just trying to show some examples of when and how you need to change code and/or type signatures, not write some amazing production quality code to drop some things from a list. No idea why you’re trying to be such an ass about it.

                                                                                        1. 3

She did not address question (1) at all. You are reading her response to question (2) as implying something about (1), which makes your response needlessly adversarial.

                                                                                    2. 2

This is a great example. To further reinforce your point, I feel like this is one place Haskell really shows its strength. It’s often a pain to figure out what the correct types for parts of your program should be, but once you know them and make a change, the Haskell compiler becomes a real guiding light when working through a refactor.

                                                                                    3. 10

He explicitly makes the point that “strengthening a promise”, that is, going from “I might give you a T” to “I’ll definitely give you a T”, shouldn’t necessarily be a breaking change, but it is in the absence of union types.

                                                                                      1. 2

                                                                                        Half baked thought here that I’m just airing to ask for an opinion on:

                                                                                        Say as an alternative, the producer produces Either (forall a. a) T instead of Maybe T, and the consumer consumes Either x T. Then the producer’s author changes it to make a stronger promise by changing it to produce Either Void T instead.

                                                                                        I think this does what I would want? This change hasn’t broken the consumer because x would match either alternative. The producer has strengthened the promise it makes because now it promises not to produce a Left constructor.
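
A minimal sketch of that idea (hypothetical names, and using Void for the strengthened producer, since Either (forall a. a) T would need impredicative types):

import Data.Void (Void)

-- The consumer stays polymorphic in the Left type.
consume :: Either e Int -> Int
consume (Right n) = n * 2
consume (Left _)  = 0

-- Before: might produce a Left.
produceV1 :: Either String Int
produceV1 = Right 42

-- After: promises never to produce a Left.
produceV2 :: Either Void Int
produceV2 = Right 42

Both consume produceV1 and consume produceV2 typecheck without touching consume, so strengthening the promise isn’t a breaking change here.
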

                                                                                        1. 4

                                                                                          When the problem is “I can’t change my mind after I had insufficient forethought”, requiring additional forethought is not a solution.

                                                                                          1. 2

                                                                                            So we’d need a way to automatically rewrite Maybe t to Either (forall a. a) t everywhere - after the fact. ;)

                                                                                    4. 2

Likewise, I wonder what he thinks about Rust’s use of its type system to ensure temporal safety without a GC. Is safe, no-GC operation, in general or for performance-critical modules, desirable for Clojure practitioners? Would they like a compile-to-native option that integrates that safe, optimized code with the rest of their app? And if not affine types, what’s his solution that doesn’t involve runtime checks that degrade performance?

                                                                                      1. 7

I’d argue that GC is a perfectly fine solution in the vast majority of cases. The overhead from advanced GC systems like the one on the JVM is becoming incredibly small, so the scenarios where you can’t afford GC are niche, in my opinion. If you are in such a situation, then types do seem like a reasonable way to approach the problem.

                                                                                        1. 3

                                                                                          I have worked professionally in Clojure but I have never had to make a performance critical application with it. The high performance code I have written has been in C and CUDA. I have been learning Rust in my spare time.

I argue that Clojure and Rust both have thread-safe memory abstractions, but Clojure’s solution has more (theoretical) overhead. This is because Rust uses ownership and affine types, while Clojure uses immutable data structures.

In particular, get/insert/remove for a Rust HashMap is O(1) amortized, while Clojure’s corresponding hash-map has O(log_32(n)) complexity for those operations (a slow-growing factor: log_32 of a million entries is about 4).

                                                                                          I haven’t made careful benchmarks to see how this scaling difference plays out in the real world, however.

                                                                                          1. 4

Having used Clojure’s various “thread safe memory abstractions”, I would say that the overhead is actual, not theoretical.

                                                                                      2. 2

                                                                                        Disclaimer: I <3 types a lot, Purescript is lovely and whatnot

                                                                                        I dunno, I kinda disagree about this. Even in the research languages, people are opting for nominal ADTs. Typescript is the exception, not the rule.

                                                                                        His wants in this space almost require “everything is a dictionary/hashmap”, and I don’t think the research in type theory is tackling his complaints (the whole “place-oriented programming” stuff and positional argument difficulties ring extremely true). M…aybe row types, but row types are not easy to use compared to the transparent and simple Typescript model in my opinion.

Row types help to solve issues generated in the ADT universe, but you still have the nominal typing problem, which is his other complaint.

His last keynote was very aggressive, and I think people wrote it off because it felt almost ignorant, but I think this keynote is extremely on point once he gets beyond the Maybe railing in the intro.

                                                                                      1. 3

                                                                                        “When pursuing a vertical scaling strategy, you will eventually run up against limits. You won’t be able to add more memory, add more disk, add more “something.” When that day comes, you’ll need to find a way to scale your application horizontally to address the problem.”

I should note that the limit, last I checked, was SGI UV systems for data-intensive apps (e.g., SAP HANA): 64 sockets of multicore Xeons, 64TB RAM, and 500ns max-latency all-to-all communication. I swore one had 256 sockets, but I may be misremembering. Most NUMA machines also have high-availability features (e.g., “RAS”). So, if it’s one application per server (i.e., just the DB), many businesses might never run into a limit on these things. The main limit I saw studying NUMA machines was price: scaling up cost a fortune compared to scaling out. One can now get for low-to-mid five digits what previously cost six to seven. Future-proofing scale-up by starting with SGI-style servers has to be more expensive, though, than starting with scale-out, even if the scale-out starts on a beefy machine.

You really should modify the article to bring pricing up. The high price of NUMA machines was literally the reason for inventing Beowulf clusters, which pushed a lot of the “spread it out on many machines” philosophy toward the mainstream. The early companies selling them always showed the price of, e.g., a 64-256 core machine from Sun/SGI/Cray vs. a cluster of 2-4 core boxes. The first was priced like a mini-mansion (or a castle, if you clustered the NUMA’s), with the second ranging from a new car to a middle-class house. HA clustering goes back further, with VMS, NonStop, and mainframe stuff. I’m not sure if cost pushed horizontal scaling for fault-tolerance to get away from them, or if folks were just building on popular ecosystems. Probably a mix, but I have no data.

                                                                                        “The number of replicas, also known as “the replication factor,” allows us to survive the loss of some members of the system (usually referred to as a “cluster”). “

I’ll add that every replica could experience the same failure, especially if we’re talking about attacks. That happened to me in a triple modular redundancy setup with a single faulty component. On top of replication, I push hardware/software diversity as much as one’s resources allow. CPU’s built with different tools/nodes. Different mobo’s and UPS’s. Maybe optical connections if worried about electrical stuff. Different OS’s. Different libraries if they perform an identical function. Different compilers. And so on. The one thing that’s the same on each node is the app you want to keep working. Even that might be several implementations written by different people, with the cluster running a mix of them. The one thing that has to be shared is the protocol for starting it all up, syncing the state, and recovering from problems. Critical layers like that should get the strongest verification the team can afford, with SQLite and FoundationDB being the exemplars in that area.

Then it’s really replicated in a fault-isolating way. It also has a lot of extra failure modes one has to test for. The good news is several companies and/or volunteers can chip in, each working on one of the 3+ hardware/software systems. Split the cost up. :)

                                                                                        1. 2

                                                                                          Those are planned subjects for future posts. I wanted to keep this one fairly simplistic for folks who aren’t experts in the area.

                                                                                          1. 1

                                                                                            That makes sense. Thanks for clarifying.

                                                                                        1. 5

                                                                                          I rarely get hyped by a talk, but this one definitely spoke to me. This is awesome work!

                                                                                          1. 3

                                                                                            I spent the day beforehand trying to get Edwin to change his talk to be about the rules of Cricket. Probably best that he ignored me.

                                                                                            1. 1

                                                                                              As an american that has looked glancingly at cricket rules, I am happy this was ignored. >.<

                                                                                          1. 7

                                                                                            One thing I would love to read more about is how to determine the cutoff between scaling horizontally and investing time/money in optimizing your software.

                                                                                            1. 2

                                                                                              Oooo, that’s a good one. I have a couple hand-wavey heuristics. I’ll think more about trying to turn that into something more real.

                                                                                              I have the next 2-3 posts vaguely planned out. But, I’ll definitely be thinking about your idea @hwayne.

                                                                                              1. 1

                                                                                                Doesn’t it boil down to, basically, “it’s a knack, spend 10 years doing it and you’ll get good at judgment”?

                                                                                                1. 2

I’d definitely take a more “business”-driven approach to that problem. Optimizing your software for cost should only be done if the money it saves exceeds what the optimization costs.

You also have to take into account indirect costs: weird optimization tricks can make code less readable, which carries a cost for future development.

                                                                                                  1. 1

                                                                                                    On the other hand, scaling horizontally adds costs of coordinating multiple servers, which includes load balancing, orchestrating deployments, distributed system problems, etc.

                                                                                                    1. 1

                                                                                                      The business-driven approach is not always the best for society as a whole. It doesn’t take into account negative externalities like the environmental cost of running inefficient programs and runtimes.

                                                                                                    2. 2

                                                                                                      I had a few that helped a lot:

                                                                                                      1. Use the fastest components. Ex: Youtube used lighttpd over Apache.

2. If caching can help, use it. Try different caching strategies (see the sketch after this list).

3. If it’s in a managed language, use a systems language and/or an alternative GC.

4. If the fast path is small, write the data-heavy part in an optimizable language, using the best algorithms for the job. Make sure it’s cache- and disk-layout-friendly. A recent D submission is a good example.

5. If it’s parallelizable, rewrite the fast path in a parallel programming language or with such a library. Previously there were PVM, MPI, Cilk, and Chapel; the last is designed for scale-up, scale-out, and easy expression simultaneously. Also, always use a simpler, lighter solution like that instead of something like Hadoop or Spark if possible.

6. Whole-program optimizing compilers (especially profile-guided) if possible. I used SGI’s for this at one point. I’m not sure if LLVM beats all of them or even has a profile-guided mode itself; I haven’t looked into that stuff in a while.

Notice that most of this doesn’t take much brains, and a few take little to no time. They usually work, too, giving anything from a small to a vast improvement. So, they’re some of my generic options. Via metaprogramming or just a good compiler, I can also envision them all integrated into one language, with compiler switches toggling the behavior. Well, except choosing the fastest component or algorithm. Just the incidental stuff.
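
On the caching point (item 2 above), here’s a minimal sketch of one generic strategy, an in-process memo table; the memoize helper is mine, not from any particular system:

import qualified Data.Map.Strict as Map
import Data.IORef

-- Wrap an expensive function with a Map-backed cache.
memoize :: Ord k => (k -> IO v) -> IO (k -> IO v)
memoize f = do
  ref <- newIORef Map.empty
  pure $ \k -> do
    cache <- readIORef ref
    case Map.lookup k cache of
      Just v  -> pure v                     -- hit: reuse the stored result
      Nothing -> do
        v <- f k                            -- miss: compute once
        modifyIORef' ref (Map.insert k v)   -- and remember it
        pure v

Trying different strategies here would mean different eviction policies (LRU, TTL, size caps); this sketch never evicts, which is only acceptable for small key spaces.
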

                                                                                                1. 4

I worked on parts of this release, and there are some neat things, including a streamlined Python API and the ability to run Connectors in Python 3. If you have any questions about this release, or you want to talk about stream processing in Python, please post something here.

                                                                                                  1. 1

Not about this, but more generally: what’s the best introduction to why I should use Wallaroo rather than any other distributed processing doobry?

Also, why isn’t there a Pony API?

                                                                                                    1. 3

                                                                                                      Ugh, I typed in a nice long answer and lost it when I hit the back button. Blarg. This version will be shorter.

There is a Pony API, but it’s “unsupported”, meaning we don’t document it. Documenting it would mean time spent on upkeep. If people start requesting the Pony API, that could change.

                                                                                                      Re: why use Wallaroo vs something else. We don’t have anything specifically comparing Wallaroo to other systems. There’s a lot of good Wallaroo related material on the blog: https://blog.wallaroolabs.com/

The #1 reason people are interested in Wallaroo’s Python API is that they want to do streaming with Python, don’t want to be tied to a specific ingest source, and aren’t interested in the overhead that comes from using a system written in Java that shuffles data back and forth between Python and Java processes.

                                                                                                      However, there are other reasons that people have as well. If you have specific use cases for streaming that you are interested in, I’d be happy to discuss the pros and cons of Wallaroo for those use-cases.

                                                                                                  1. 15

To me, this is more of a grab bag of random things than a well-thought-out argument for uniqueness. I still enjoyed it.

And it did make me think that I’d love to read an article about what made BeOS unique at the time (from someone who was part of the community then) and what makes Haiku unique now (from someone who is part of the community now). I’d love to read that article (or two articles, if the authors worked together to make them a nice pairing).

The “what made BeOS unique” piece might be harder, because you’d need to give a lot of background on why some things, both the tech and its effects, were revolutionary then even though they feel fairly commonplace now.

                                                                                                    Anyway, enough babbling.

                                                                                                    1. 5

                                                                                                      I was part of the BeOS community almost 2 decades ago, pretty much from the time they ported it to x86.

A few things that made it unique at the time: a 64-bit filesystem that could be queried like a database; incredible support for multiprocessing and low latency that allowed for professional-quality media editing apps; and an API that made programmers want to write software for it even when there weren’t that many users.

                                                                                                    1. 2

                                                                                                      I’m curious. Why?

I wouldn’t think to compare D and Nim, much less put in the effort to maintain a comparison repo.

                                                                                                      Color me curious. Anyone have any insight?

                                                                                                      1. 4

                                                                                                        They’re both C++ replacements that claim to be better. They have some feature overlap. A comparison makes sense given either could become someone’s general-purpose language for new projects.

There are fans of each on the site, so I figured I’d throw it on here in case they wanted to play with a new language this weekend.

                                                                                                        1. 3

                                                                                                          Interesting. I’ve done some Nim in the past. Never considered it as a C++ replacement.

                                                                                                          1. 3

They told me it had the ability to do unsafe things, good integration with C, and metaprogramming for customization. Anything with those can potentially be a C++ replacement. If I was misinformed about the unsafe support, then it wouldn’t be as suited to this task. Unlike C++, it also spits out C.

                                                                                                      1. 3

This looks really interesting, but whooo, I bet Confluent is not happy about this.

                                                                                                        1. 4

                                                                                                          They’re apparently publicly upset about this.

                                                                                                          “They’re profiting from our use of resources,” Kreps said. “We’re charging our customers for the service, paying for compute and network resources that we use.”

                                                                                                          https://twitter.com/kmassada/status/1068553502455037952

                                                                                                          Unfortunately I can’t find the original source.

                                                                                                        1. 2

I didn’t watch to the end, but is there any movement on adding the pthread/thread-local optimization to speed things up?

                                                                                                          1. 1

                                                                                                            Not that I am aware of