1. 5

The biggest issue with OO in my experience is the fact that objects are opaque state machines. The more objects you have in your program, the more independent states you have to keep in your head when reasoning about it. Since objects can be referenced in many places, and their states can be interdependent, the complexity of the system grows really fast.

I find that it’s pretty much impossible to do any local reasoning in OO projects. This becomes problematic when you’re working on large systems where you’re not familiar with all the parts of the code, and you never know if a particular change you’re making might affect some other code you don’t know via side effects in an unexpected way.

1. 11

I’ve used Clojure professionally for about 8 years now, and have trained many people to work with it during that time. I can firmly say that s-expressions are fine, and have many tangible advantages over other forms of syntax.

One huge advantage of s-expression syntax is that it allows for powerful structural editors as seen here. Instead of having to work with lines of text, the editor works with the code semantically allowing you to manipulate expressions directly. Once you work this way for a bit, and it really doesn’t take long, you start thinking about code differently. You start seeing each s-expression as a lego block, and you just stick them together to build things.

I also find that s-expressions make the code more scannable. You basically have a diagram of the relationships in your code that you can analyze at a glance. In addition, the syntax is very regular and follows a few simple rules, further adding to readability.

Another big advantage of course is macros. When you’re using data structures to write code then you can use the language itself to manipulate the code. This is something that’s very cumbersome to do in any language that has separate syntax for logic and data.

I think the reason why people find s-expressions difficult to read is primarily due to the fact that they don’t share roots with any mainstream languages. If you’re an English speaker, then you’ll find French, German, or Spanish syntax familiar. Sure, you’ll have to learn a few rules, and some new words, but a lot of ideas are directly transferable between them. However, if you try to learn Korean, most of your existing knowledge won’t be easily transferable. That should not be confused with Korean being inherently more difficult to learn or use, however.

That said, I do think there are differences between s-expression syntaxes that appeal to different people. Personally, I find that Clojure syntax has a number of features that greatly add to readability. For example, having literal notation for data structures helps break things up visually:

(defn foo [a b]
  {:result (+ a b)})


When I see a list I know the first element is what’s going to be called and the rest of the items are the parameters. When I see a vector I know it’s just a data literal for the argument, and I can immediately see that the function returns a map as its result.

Clojure also provides destructuring, where we can easily write out the parameters to a function:

(let [[smaller bigger] (split-with #(< % 5) (range 10))]
  (println smaller bigger))

(print-user ["John" "397 King street, Toronto" "416-936-3218"])

(defn foo [{:keys [bar baz]}]
  (println bar baz))

(foo {:bar "some value" :baz "another value"})


The threading macro is also extremely helpful in flattening out nested expressions:

(reduce + (interpose 5 (map inc (range 10))))

When we use the threading macro, it looks a lot more natural:

(->> (range 10) (map inc) (interpose 5) (reduce +))
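For readers more at home in Python, the same reshaping can be mimicked directly; this is only a rough analogue, and `inc` and `interpose` are hand-rolled stand-ins for the Clojure functions of the same names:

```python
from functools import reduce
from operator import add

def inc(x):
    return x + 1

def interpose(sep, xs):
    # Insert sep between consecutive elements, like clojure.core/interpose.
    out = []
    for x in xs:
        if out:
            out.append(sep)
        out.append(x)
    return out

# Nested form, read inside-out:
nested = reduce(add, interpose(5, [inc(x) for x in range(10)]))

# "Threaded" form, read top-to-bottom as a pipeline:
step = list(range(10))
step = [inc(x) for x in step]
step = interpose(5, step)
threaded = reduce(add, step)

assert nested == threaded == 100
```

The threaded version reads in the order the data actually flows, which is the readability point being made above.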


I think little things like that really add up in terms of improving readability.

1. 1

The article makes a really important observation that’s often overlooked. It’s really important for code to express its intent clearly. The key to working with code effectively lies in your ability to understand its purpose. Static typing can have a big impact on the way we structure code, and in many cases it can actually obscure the bigger picture as the article illustrates.

1. 20

I don’t see how the article illustrates that. The article’s argument is that converting between serialization format and your business object requires maintaining some other kind of code and that costs something.

In my experience I’ve had other costs, which I found more expensive, in dealing with using one’s serialization layer as a business object:

1. The expected fields and what they are tend not to be well documented. You can just convert the JSON and use it as a dict or whatever, but it’s hard for someone who didn’t write the code to know what should be there. With static types, even if one doesn’t document the values, they are there for me to see.
2. The semantics of my serialization layer may not match the semantics of my language and, more importantly, what I want to express in my business logic. For example, JSON’s lack of integers.
3. The serialization changes over versions of the software but there is a conversion to the business object that still makes sense, I can do that at the border and not affect the rest of my code.
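The three points above can be sketched as a border conversion in Python; the field names here are invented for illustration:

```python
import json
from dataclasses import dataclass

@dataclass
class User:
    # The expected fields are documented by the type itself (point 1).
    name: str
    age: int  # JSON numbers may arrive as floats; we normalize here (point 2).

def user_from_json(raw: str) -> User:
    # All version/format drift is absorbed at this one border (point 3),
    # so the rest of the code never touches the serialization layer.
    data = json.loads(raw)
    return User(name=data["name"], age=int(data["age"]))

user = user_from_json('{"name": "Ada", "age": 36.0}')
assert user == User(name="Ada", age=36)
```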

The call out to clojure.spec I found was a bit odd as well, isn’t that just what any reasonable serialization framework is doing for you?

As a static type enthusiast, I do not feel that the conversion on the edges of my program distract from some bigger picture. I do feel that after the boundary of my program, I do not want to care what the user gave me, I just want to know that it is correct. If one’s language supports working directly with some converted JSON, fine.

On another note, I don’t understand the Hickey quote in this article. What is true of all cars that have driven off a road with a rumble strip? They went over the rumble strip. But cars drive off of roads without rumble strips too, so what’s the strength of that observation? Does the lack of a rumble strip somehow make you a more alert driver? Do rumble strips have zero effect on accidents? I don’t know, but in terms of static types, nobody knows, because the studies are really hard to do. This sort of rhetoric is just distracting and worthless.

In my experience, the talk of static types and bugs is really not the strength. I find refactoring and maintenance to be the strength of types. I can enter a piece of code I do not recognize, which is poorly documented, and start asking questions about the type of something and get trustworthy answers. I also find that looking at the type of a function tends to tell me a lot about it. Not always, but often enough for me to find it valuable. That isn’t to say one shouldn’t document code, but I find dynamically typed code tends to be as poorly documented as any other code, so I value the types.

If one doesn’t share that experience and/or values, then you’ll disagree, and that’s fine. I’m not saying static types are objectively superior, just that I tend to find them superior. I have not found a program that I wanted to express that I preferred to express with a dynamic language. I say this as someone that spends a fair amount of time in both paradigms. The lack of studies showing strengths in one direction or another doesn’t mean dynamic types are superior to static or vice versa, it just means that one can’t say one way or the other and most of these blog posts are just echos of one’s experience and values. I believe this blog post is falling into the trap of trying to find some explanatory value in the author’s experience when in reality, it’s just the author’s experience. I wish these blog posts started with a big “In my experience, the following has been true….”

Disclaimer: while I think Java has some strengths in terms of typing, I really much prefer spending my time in OCaml, which has a type system I much prefer, and that is generally what I mean when I say “static types”. But I think in this comment, something like Java mostly applies as well.

1. 3

The call out to clojure.spec I found was a bit odd as well, isn’t that just what any reasonable serialization framework is doing for you?

My experience is that Clojure spec addresses the three points. However, Spec can live on the side as opposed to being mixed with the implementation, and it allows you to only specify/translate the types for the specific fields that you care about. I think this talk does a good job outlining the differences between the approaches.

On another note, I don’t understand the Hickey quote in this article.

What you ultimately care about is semantic correctness, but type systems can actually have a negative impact here. For example, here’s insertion sort implemented in Idris; it’s 260 lines of code. Personally, I have a much easier time understanding that the following Python version is correct:

def insertionsort( aList ):
    for i in range( 1, len( aList ) ):
        tmp = aList[i]
        k = i
        while k > 0 and tmp < aList[k - 1]:
            aList[k] = aList[k - 1]
            k -= 1
        aList[k] = tmp
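One middle ground, short of a machine-checked proof, is a quick randomized check of the Python version against the built-in sorted (a sketch; the return is added here only so the checked call reads naturally):

```python
import random

def insertionsort(aList):
    for i in range(1, len(aList)):
        tmp = aList[i]
        k = i
        while k > 0 and tmp < aList[k - 1]:
            aList[k] = aList[k - 1]
            k -= 1
        aList[k] = tmp
    return aList  # added for convenience; the original sorts in place

# Spot-check against the reference implementation on random inputs.
for _ in range(100):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert insertionsort(list(xs)) == sorted(xs)
```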


I can enter a piece of code I do not recognize, which is poorly documented, and start asking questions about the type of something and get trustworthy answers.

I’ve worked with static languages for about a decade. I never found that the type system was a really big help in this regard. What I want to know first and foremost when looking at code I don’t recognize is the intent of the code. Anything that detracts from being able to tell that is a net negative. The above example contrasting Idris and Python is a perfect example of what I’m talking about.

Likewise, I don’t think that either approach is superior to the other. Both appear to work effectively in practice, and seem to appeal to different mindsets. I think that alone makes both type disciplines valuable.

It’s also entirely possible that the language doesn’t actually play a major role in software quality. Perhaps, process, developer skill, testing practices, and so on are the dominant factors. So, the right language inevitably becomes the one that the team enjoys working with.

1. 7

That’s not a simple sort in Idris, it’s a formal, machine checked proof that the implemented function always sorts. Formal verification is a separate field from static typing and we shouldn’t conflate them.

For the record, the python code fails if you pass it a list of incomparables, while the Idris code will catch that at compile time.

I’m a fan of both dynamic typing and formal methods, but I don’t want to use misconceptions of the latter used to argue for the former.

1. 1

machine checked proof that the implemented function always sorts.

And if comparing spec vs test sizes apples to apples, that means we need a test for every possible combination of every value that function can take to be sure it will work for all of them. On 64-bit systems, that’s maybe 18,446,744,073,709,551,615 values per variable, with a multiplying effect happening when they’re combined with potential program orderings or multiple variable inputs. It could take a fair amount of space to code all that in as tests, with execution probably requiring quantum computers, quark-based FPGAs, or something along those lines if tests must finish running in reasonable time. There’s not a supercomputer on Earth that could achieve with testing the assurance some verified compilers or runtimes got with formal verification of formal, static specifications.
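The arithmetic in the paragraph above is easy to check directly; a back-of-the-envelope computation in Python:

```python
# One 64-bit variable admits 2**64 distinct values.
per_variable = 2**64
assert per_variable - 1 == 18_446_744_073_709_551_615

# Two independent 64-bit inputs multiply the space, as the comment notes.
two_inputs = per_variable ** 2
assert two_inputs == 2**128

# Even at a generous 10**9 test executions per second, exhausting a
# single 64-bit input would take centuries:
seconds = per_variable / 10**9
years = seconds / (60 * 60 * 24 * 365)
assert years > 500
```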

Apples to apples, the formal specs with either runtime checks or verification are a lot smaller, faster, and cheaper for total correctness than tests.

2. 3

For example, here’s insertion sort implemented in Idris, it’s 260 lines of code. (…) Personally, I have a much easier time understanding that the following Python version is correct: (snippet)

An actually honest comparison would include a formal proof of correctness of the Python snippet. Merely annotating your Python snippet with pre- and postcondition annotations (which, in the general case, is still a long way from actually producing a proof) would double its size. And this is ignoring the fact that many parts in your snippet have (partially) user-definable semantics, like the indexing operator and the len function. Properly accounting for these things can only make a proof of correctness longer.

That being said, that a proof of correctness of insertion sort takes 260 lines of code doesn’t speak very well of the language the proof is written in.

What I want to know first and foremost when looking at code I don’t recognize is the intent of the code.

Wait, the “intent”, rather than what the code actually does?

1. 1

An actually honest comparison would include a formal proof of correctness of the Python snippet.

That’s not a business requirement. The requirement is having a function that does what was intended. The bigger point you appear to have missed is that a complex formal specification is itself a program! What method do you use to verify that the specification is describing the intent accurately?

Wait, the “intent”, rather than what the code actually does?

Correct, these are two entirely different things. Verifying that the code does what was intended is often the difficult task when writing software. I don’t find static typing to provide a lot of assistance in that regard. In fact, I’d say that the Idris insertion sort implementation is actually working against this goal.

1. 4

The bigger point you appear to have missed is that a complex formal specification is itself a program!

This is not true. The specification is a precondition-postcondition pair. The specification might not even be satisfiable!

What method do you use to verify that the specification is describing the intent accurately?

Asking questions. Normally users have trouble thinking abstractly, so when I identify a potential gap in a specification, I formulate a concrete test case where the specification might not match the user’s intention, and I ask them what the program’s intended behavior in this test case is.

I don’t find static typing to provide a lot of assistance in that regard.

I agree, but with reservations. Types don’t write that many of my proofs for me, but, under certain reasonable assumptions, proving things rigorously about typed programs is easier than proving the same things equally rigorously about untyped programs.

1. 1

This is not true. The specification is a precondition-postcondition pair. The specification might not even be satisfiable!

A static type specification is a program, plain and simple. In fact, lots of advanced type systems, such as the one found in Scala, are actually Turing complete. The more things you try to encode formally, the more complex this program becomes.

Asking questions. Normally users have trouble thinking abstractly, so when I identify a potential gap in a specification, I formulate a concrete test case where the specification might not match the user’s intention, and I ask them what the program’s intended behavior in this test case is.

So, how is this different from what people do when they’re writing specification tests?

1. 2

A static type specification is a program plain and simple.

Not any more than a (JavaScript-free) HTML document is a program for your browser to run.

In fact, lots of advanced type systems, such as the one found in Scala, are actually Turing complete.

I stick to Standard ML, whose type system is deliberately limited. Anything that can’t be verified by type-checking (that is, a lot), I prove by myself. Both the code and the proof of correctness end up simpler this way.

So, how is this different from what people do when they’re writing specification tests?

I am not testing any code. I am testing whether my specification captures what the user wants. Then, as a completely separate step, I write a program that provably meets the specification.

1. 1

I stick to Standard ML, whose type system is deliberately limited. Anything that can’t be verified by type-checking (that is, a lot), I prove by myself. Both the code and the proof of correctness end up simpler this way.

At that point it’s really just degrees of comfort in how much stuff you want to prove statically at compile time. I find that runtime contracts like Clojure Spec are a perfectly fine alternative.

I am testing whether my specification captures what the user wants.

I’ve never seen that done effectively using static types myself, but perhaps you’re dealing with a very different domain from the ones I’ve worked in.

1. 1

At that point it’s really just degrees of comfort in how much stuff you want to prove statically at compile time.

This is not a matter of “degree” or “comfort” or “taste”. Everything has to be proven statically, in the sense of “before the program runs”. However, not everything has to be proven or validated by a type checker. Sometimes directly using your brain is simpler and more effective.

I am testing whether my specification captures what the user wants.

I’ve never seen that done effectively using static types myself

Me neither. I just use the ability to see possibilities outside the “happy path”.

1. 1

However, not everything has to be proven or validated by a type checker.

I see that as a degree of comfort. You’re picking and choosing what aspects of the program you’re going to prove formally. The range is from having a total proof to having no proof at all.

Sometimes directly using your brain is simpler and more effective.

Right, and the part we disagree on is how much assistance we want from the language and in what form.

1. 1

You’re picking and choosing what aspects of the program you’re going to prove formally.

I’m not “picking” anything. I always prove rigorously that my programs meet their functional specifications. However, my proofs are meant for human rather than mechanical consumption, hence:

• Proofs cannot be too long.
• Proofs cannot demand simultaneous attention to more detail than I can handle.
• Abstractions are evaluated according to the extent to which they shorten proofs and compartmentalize details.

Right, and the part we disagree on is how much assistance we want from the language and in what form.

The best kind of “assistance” a general-purpose language can provide is having a clean semantics and getting out of the way when it can’t help. If you ever actually try to prove a program[0] correct, you will notice that:

• Proving that a control flow point is unreachable may require looking arbitrarily far back into the history of the computation. Hence, you want as few unreachable control flow points as possible, preferably none.

• Proving that a procedure call computes a result of interest requires making assumptions about the procedure’s precondition-postcondition pair. For second-class (statically dispatched) procedure calls, these assumptions can be discharged immediately—you know what procedure will be called. For first-class (dynamically dispatched) procedure calls, these assumptions may only be discharged in a very remote[1] part of your program. Hence, first-class procedures ought to be used sparingly.

[0] Actual programs, not just high-level algorithm descriptions that your programs allegedly implement.

[1] One way to alleviate the burden of communicating precondition-postcondition requirements between far away parts in a program is to systematically use so-called type class laws, but this is not a widely adopted solution.

1. 1

Right, and the other approach to this problem is to use runtime contracts such as Clojure Spec. My experience is that this approach makes it much easier to express meaningful specifications. I’m also able to use it where it makes the most sense, which tends to be at the API level. I find there are benefits and trade-offs in both approaches in practice.

1. 1

Right, and the other approach to this problem is to use runtime contracts such as Clojure Spec.

This is not a proof of correctness, so no.

I’ve already identified two things that don’t help: runtime checks and overusing first-class procedures. Runtime-checked contracts have the dubious honor of using the latter to implement the former in order to achieve nothing at all besides making my program marginally slower.

My experience is that this approach makes it much easier to express meaningful specifications.

I can already express meaningful specifications in many-sorted first-order logic. The entirety of mathematics is available to me—why would I want to confine myself to what can be said in a programming language?

I’m also able to use it where it makes the most sense, which tends to be at the API level.

It makes the most sense before you even begin to write your program.

1. 2

This is not a proof of correctness, so no.

I think this is the key disconnect we have here. My goal is to produce working software for people to use, and writing a proof of correctness is a tool for achieving that. There are other viable tools that each have their pros and cons. My experience tells me that writing proofs of correctness is not the most effective way to achieve the goal of delivering working software on time. Your experience clearly differs from mine, and that’s perfectly fine.

I can already express meaningful specifications in many-sorted first-order logic. The entirety of mathematics is available to me—why would I want to confine myself to what can be said in a programming language?

You clearly would not. However, there are plenty of reasons why other people prefer this. A few reasons off the top of my head are the following. It’s much easier for most developers to read runtime contracts. This means that it’s easier to onboard people and train them. The contracts tend to be much simpler and more expressive. This makes it easier to read and understand them. They allow you to trivially express things that are hard to express at compile time. Contracts can be used selectively in places where they make the most sense. Contracts can be open, while types are closed.

It makes the most sense before you even begin to write your program.

Again, we have a very divergent experience here. I find that in most situations I don’t know the shape of the data up front, and I don’t know what the solution is going to be ultimately. So, I interactively solve problems using a REPL integrated editor. I might start with a particular approach, scrap it, try something else, and so on. Once I settle on a way I want to do things, I’ll add a spec for the API.

Just to be clear, I’m not arguing that my approach is somehow better, or trying to convince you to use it. I’m simply explaining that having tried both, I find it works much better for me. At the same time, I’ve seen exactly zero empirical evidence to suggest that your approach is more effective in practice. Given that, I don’t think we’re going to gain anything more from this conversation. We’re both strongly convinced by our experience to use different tools and workflows. It’s highly unlikely that we’ll be changing each other’s minds here.

Cheers

1. 2

The contracts tend to be much simpler and more expressive. (…) Contracts can be open while types are closed.

I said “first-order logic”, not “types”. Logic allows you to express things that are impossible in a programming language, like “what is the distribution of outputs of this program when fed a sample of a stochastic process?”—which your glorified test case generator cannot generate.

Just to be clear, I’m not arguing that my approach is somehow better, or trying to convince you to use it. I’m simply explaining that having tried both, I find it works much better for me.

I’m not trying to convince you of anything either, but I honestly don’t think you have tried using mathematical logic. You might have tried, say, Haskell or Scala, and decided it’s not your thing, and that’s totally fine. But don’t conflate logic (which is external to any programming language) with type systems (which are often the central component of a programming language’s design, and certainly the most difficult one to change). It is either ignorant or dishonest.

2. 1

You can do more with a formal spec than a test. They can be used to generate equivalent tests, like in EiffelStudio. Then, they can be used with formal methods tools, automated or full style, to prove they hold for all values. They can also be used to aid optimization by the compiler, as in examples ranging from Common Lisp to Strongtalk I gave you in another comment. There’s even been some work on natural language systems for formal specs, which might be used to generate English descriptions one day.

So, one gets more ROI out of specs than tests alone.

1. 2

That’s why I find Clojure Spec to be a very useful tool. My experience is that runtime contracts are a better tool for creating specifications. Contracts focus on the usage semantics, while I find that types only have an indirect relationship with them.

At the same time, contracts are opt in, and can be created where they make the most sense. I find this happens to be at the boundaries between components. I typically want to focus on making sure that the API works as intended.
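The idea of an opt-in contract at an API boundary can be sketched in Python; this is a hypothetical decorator standing in for the role Clojure Spec plays, not Spec itself:

```python
def contract(pre, post):
    # Minimal runtime-contract decorator: check the precondition on the
    # arguments and the postcondition on the result, at the API boundary.
    def wrap(fn):
        def inner(*args):
            assert pre(*args), f"precondition failed for {fn.__name__}"
            result = fn(*args)
            assert post(result), f"postcondition failed for {fn.__name__}"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: all(isinstance(x, int) for x in xs),
          post=lambda r: all(a <= b for a, b in zip(r, r[1:])))
def sort_ints(xs):
    return sorted(xs)

assert sort_ints([3, 1, 2]) == [1, 2, 3]
```

Only the decorated boundary function pays the checking cost; internal helpers stay contract-free, which is the opt-in property described above.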

1. 2

Forgot to say thanks for Clojure spec link. That was a nice write-up. Good they added that to Clojure.

2. 1

“A static type specification is a program plain and simple. “

I don’t think so, but I could be wrong. My reading in formal specs showed they were usually precise descriptions of what is to be done. The program is almost always a description of how to do something in a series of steps. Those are different things. Most formal specs aren’t executable on their own either, since they’re too abstract.

There are formal specifications of how something is done in a concrete way that captures all its behaviors like with seL4. You could call those programs. The other stuff sounds different, though, since it’s about the what or too abstract to produce the result the programmer wants. So, I default on not a program with some exceptions.

1. 2

It’s a metaprogram that’s executed by the compiler with your program as the input. Obviously, you can have very trivial specifications that don’t really qualify as programs. However, you can also have very complex specifications. There’s even a paper on implementing a type debugger for Scala. It’s hard to argue that specifications that need a debugger aren’t programs.

1. 1

I think this boils down to the definition of “program.” We may have different ones. Mine is an executable description of one or more steps that turn concrete inputs into concrete outputs, optionally with state. A metaprogram can do that: it does it on program text/symbols. A formal specification usually can’t, due to it being too abstract, non-executable, or having no concept of I/O. They are typically input to some program or embedded in one. I previously said some could qualify, especially in tooling like Isabelle or Prolog.

So, what is your definition of a program so I can test whether formal specs are programs or metaprograms against that definition? Also, out of curiosity too.

1. 2

My definition is that a program is a computational process that accepts some input, and produces some output. In case of the type system, it accepts the source code as its input, and decides whether it matches the specified constraints.

1. 1

Well, I could see that. It’s an abstract equivalent to mine, it looks like. I’ll hold off debating that until I’m more certain of what definition I want to go with.

3. 3

That’s not a business requirement. The requirement is having a function that does what was intended

You’re comparing two separate things, though! The Idris is a proven-correct function, while the Python is just a regular function. If all the business wants is a “probably correct” function, the Idris code would be just a few lines, too.

1. 1

The point here is that formalism does not appear to help ensure the code is doing what’s intended. You have to be confident that you’re proving the right thing. The more complex your proof is, the harder it becomes to definitively say that it is correct. Using less formalism in Python or Idris results in code where it’s easier for the human reader to tell the intent.

1. 4

You can say your proof is correct because you have a machine check it for you. Empirically, we see that formally verified systems are less buggy than unverified systems.

1. 2

A machine can’t check that you’re proving what was intended. A human has to understand the proof and determine that it matches their intent.

1. 2

A human had to check the intent (validate) either way. A machine can check the implementation is correct (verify).

Like in the Idris, I have to validate that the function is supposed to sort. Once I know that’s the intention, I can be assured that it does, in fact, sort, because I have a proof.

1. 1

What I’m saying is that you have to understand the proof, and that can be hard to do with complex proofs. Meanwhile, other methods, such as runtime contracts or even tests, are often easier to understand.

1. 1

That’s not how formal verification works, though. Let’s say my intention is “sort a list.” I write a function that I think does this. Then I write a formal specification, like “\A j, k \in 1..Len(sorted): j < k => sorted[j] <= sorted[k]”. Finally, I write the proof.

I need to validate that said specification is what I want. But the machine can verify the function matches the specification, because it can examine the proof.
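The quantified spec above translates almost directly into an executable check; a Python rendering of it, suitable for runtime validation rather than proof:

```python
def sorted_spec(s):
    # \A j, k \in 1..Len(s): j < k => s[j] <= s[k], written executably.
    return all(s[j] <= s[k]
               for j in range(len(s))
               for k in range(j + 1, len(s)))

assert sorted_spec([1, 2, 2, 9])
assert not sorted_spec([3, 1, 2])
```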

1. 1

The fact that we’re talking past each other is a good illustration of the problem I’m trying to convey here. I’m talking about the human reader having to understand specifications like \A j, k \in 1..Len(sorted): j < k => sorted[j] <= sorted[k], only much longer. It’s easy to misread a symbol, and misinterpret what is being specified.

1. 2

You did say “have to understand the proof” (not specification) before. I strongly agree with the latter - the language we use for writing specs can easily get so complex that the specs are more error-prone than their subjects.

I once wrote a SymEx tool for PLCs and found that specs in my initial choice of property language (LTL) were much harder to get right than the PLC code itself. I then looked at the properties that would likely need to be expressed, and cut the spec language down to a few higher-level primitives. This actually helped a lot.

Even if restricting the property language isn’t an option, having a standard library (or package ecosystem) of properties would probably get us rather close - so instead of \A j, k \in ... we could write Sorted(s) and trust the stdlib / package definition of Sorted to do its name justice.

2. 4

For example, here’s insertion sort implemented in Idris, it’s 260 lines of code

When this code sample has been brought up before (perhaps by you), it’s been pointed out that this is not expected to be a production implementation, and more of an example of playing with the type system. There is plenty of Python golf code out there too that one could use as an example to make a point. But, if we are going to compare things, the actual implementation of sort in Python is what, several hundred lines of C code? So your Python insertion sort might be short and sweet, but no more the production code people use than the Idris one. But if the Idris one were the production implementation, I would rather spend time understanding it than the Python sort function.

It’s also entirely possible that the language doesn’t actually play a major role in software quality. Perhaps, process, developer skill, testing practices, and so on are the dominant factors.

That is likely true, IMO. I think it’s interesting that one could replace type system in the Hickey quote with “testing” or “code review” and the statement would still be true, but people seem to zero in on types. No-one serious says that we shouldn’t have testing because we still have bugs in software.

I never found that the type system was a really big help in this regard. What I want to know first and foremost when looking at code I don’t recognize is the intent of the code.

My experience has definitely not been this. Right now I’m doing maintenance on some code and it has a call in it: state.monitors.contains(monitor), and I don’t have a good way of figuring out what state or monitors is without grepping around in the code. In OCaml I’d just hit C-t and it’d tell me what it is. I find this to be a common pattern in my life, as I have tended to be part of the clean-up crew on projects lately. The intent of that code is pretty obvious, but that doesn’t help me much for the refactoring I’m doing. But experiences vary.

1. 2

When this code sample has been brought up before (perhaps by you) it’s been pointed out that this is not expected to be a production implementation and more of an example of playing with the type system.

The point still stands though: the more properties you try to encode formally, the more baroque the code gets. Sounds like you’re agreeing that it’s often preferable to avoid such formalisms in production code.

But, if we are going to compare things, the actual implementation of sort in Python is what..several hundred lines of C code? So your Python insertion sort might be short and sweet, but no more the production code people use than the Idris one.

The sort implementation in Python handles many different kinds of sorts. If you took the approach of describing all of those hundreds of lines of C with types in Idris, that would result in many thousands of lines of code. So, you still have the same problem there.

No one serious says that we shouldn’t have testing because we still have bugs in software.

People argue about what kind of testing is necessary or useful all the time, though. Ultimately, the goal is to have a semantic specification, and to be able to tell that your code conforms to it. Testing is one of the few known effective methods for doing that. This is why some form of testing is needed whether you use static typing or not. To put it another way, testing simply isn’t optional for serious projects. Meanwhile, many large projects are developed in dynamic languages just fine.

My experience has definitely not been this. Right now I’m doing maintenance on some code and it has a call in it: state.monitors.contains(monitor), and I don’t have a good way of figuring out what state or monitors is without grepping around in the code.

In Clojure, I’d just hit cmd+enter from the editor to run the code in the REPL and see what a monitor looks like. My team has been working with Clojure for over 8 years now, and I often end up working with code I’m not familiar with.

1. 3

Sounds like you’re agreeing that it’s often preferable to avoid such formalisms in production code.

At the moment, yes. I am not a type theorist, but as far as I have seen, dependent types are not at the point where we know how to use them effectively in a production setting yet. But I do make pretty heavy use of types elsewhere in codebases I work on when possible, and try to encode what invariants I can in them (which is pretty often).

If you took the approach of describing all of those hundreds of lines of C with types in Idris, that would result in many thousands of lines of code. So, you still have the same problem there.

Maybe! I don’t actually know. The types in the Idris implementation might be sufficient to get very performant code out of it (although I doubt it at this point).

In Clojure, I’d just hit cmd+enter from the editor to run the code in the REPL …

I don’t know anything about Clojure; in the case I’m working on, running the code is challenging, as the part I’m refactoring needs a bunch of dependencies and data and constructs different things based on runtime parameters. Even if I could run it on my machine I don’t know how much I’d trust it. The power of dynamic types at work.

1. 2

I don’t know anything about Clojure; in the case I’m working on, running the code is challenging, as the part I’m refactoring needs a bunch of dependencies and data and constructs different things based on runtime parameters. Even if I could run it on my machine I don’t know how much I’d trust it. The power of dynamic types at work.

There is a fundamental difference in workflows here. With Clojure, I always work against a live running system. The REPL runs within the actual application runtime, and it’s not restricted to my local machine. I can connect a REPL to an application in production, and inspect anything I want there. In fact, I have done just that on many occasions.

This is indeed the power of dynamic types at work. Everything is live, inspectable, and reloadable. The reality is that your application will need to interact with an outside world you have no control over. You simply can’t predict at compile time everything that could happen at runtime. Services go down, APIs change, and so on. When you have a system that can be manipulated at runtime, you can easily adapt to changes without any downtime.

1. 1

That sounds like a good argument for manipulating something at runtime, but not dynamic types. You can build statically-typed platforms that allow runtime inspection or modification. The changes will just be type-checked before being uploaded. The description of StrongTalk comes to mind.

1. 2

Static type systems are typically global, and this places a lot of restrictions on what can be modified at runtime. With a dynamic language you can change any aspect of the running application, while arbitrary eval is problematic for static type systems.

2. 1

When you have a system that can be manipulated at runtime, you can easily adapt to changes without any downtime.

There are architectural choices that address this point better, in most situations, IME. That is, a standard setup of a load balancer for application servers and something like CARP on the load balancers. For street cred: I’ve worked as an Erlang developer.

1. 1

Sure, you can work around that by adding a lot of complexity to your infrastructure. That doesn’t change the fact that it is a limitation.

1. 1

In my experience, if uptime is really important, the architecture I’m referring to is required anyway to deal with all the forms of failure other than just the code having a bug in it. So, again in my experience, while I agree it is a limitation, it is overall simpler. But this whole static vs dynamic thing is about people being willing to accept some limitations for other, perceived, benefits.

1. 1

My experience is that it very much depends. I’ve worked on many different projects, and sometimes such infrastructure was the right solution, and in others it was not. For example, consider the case of the NASA Deep Space 1 mission.

1. 2

I’m not sure how Deep Space 1 supports the point you’re making. Remote Agent on DS1 was mostly formally verified (using SPIN, I believe), and the bug was in the piece of code that was not formally verified.

1. 1

The point is that it was possible to fix this bug at runtime, in a system that could not be load balanced or restarted. In practice, you don’t control the environment, and you simply cannot account for everything that can go wrong at compile time. Maybe your chip gets hit by a cosmic ray, maybe a remote sensor gets damaged, maybe a service you rely on goes down. Being able to change code at runtime is extremely valuable in many situations.

1. 1

The things you listed can be accounted for at build time. Certainly NASA doesn’t send chips that are not radiation hardened into space, saying “we can just remote debug it”. Sensors getting damaged is expected, and expecting services one relies on to go down is table stakes for a distributed system. And while I find NASA examples really cool, I do not find them compelling. NASA does a lot of things that a vast majority of developers don’t and probably shouldn’t do. Remember, NASA also formally verifies some of their software components, but you aren’t advocating for that, which makes the NASA example confusing as to which lesson one is supposed to take from it. And those cosmic rays are just as likely to bring down one’s remote debugging facility as they are to break the system’s other components.

1. 1

I think you’re fixating too much on NASA here. The example is just an illustration of the power of having a reloadable system. There are plenty of situations where you’re not NASA and this is an extremely useful feature. If you can’t see the value in it, I really don’t know what else to say.

1. 1

I’m responding to the example you gave, if you have other examples that are more compelling then I would have expected you to post that.

1. 1

What’s compelling is in the eye of the beholder. It’s pretty clear that there’s nothing I can say that you will find convincing. Much like I’m not convinced by your position.

1. 8

Seems like the old MS strategy of embrace, extend, and extinguish. This already happened with Google Talk that was using Jabber, then got rebranded as Hangouts with a proprietary protocol. Open standards are the only thing that makes the internet possible, and every large company is trying to find a way to create its own walled gardens to lock in the users.

1. -2

You do realize that AMP is entirely standards-based, right? My sarcasm detector misfires sometimes.

1. 13

What standard? It isn’t standard HTML.

Inventing some custom tags and a forced-down-everyone’s-throat JS renderer for said tags does not a standard make.

The amp project website is registered to Google, and realistically the “project” is controlled by google.

If you think anything google does re: AMP is anything but a massive power grab for even more control over the web, you’re incredibly naive.

1. 12

Something is an open standard if everyone can have input into it, and if they can entirely self-host it and gain all the benefits.

If I self-host AMP entirely, the requirement (loading the framework from Google’s servers) is not met anymore; the same if I ban the Google AMP cache and use my own. In both cases, I will not get any of the AMP search benefits, and Google will declare my AMP invalid.

Valid AMP is, by definition, a proprietary product.

1. 3

The problem is that Google is changing the nature of email. Going back to the GTalk example, Google started with an open standard, and then kept updating the protocol until it became mostly incompatible with third-party clients.

1. 4

I wonder how much role interactivity plays here. Scheme and Smalltalk are both deeply interactive environments where you can have a conversation with the language. This makes learning the language both enjoyable and engaging.

1. 39

Perhaps build systems should not rely on URLs pointing to the same thing to do a build? I don’t see Github as being at fault here, it was not designed to provide deterministic build dependencies.

1. 13

Right, GitHub isn’t a dependency management system. Meanwhile, Git provides very few guarantees regarding preserving history in a repository. If you are going to build a dependency management system on top of GitHub, at the very least use commit hashes or tags explicitly to pin the artifacts you’re pulling. It won’t solve the problem of them being deleted, but at least you’ll know that something changed from under you. Also, you really should have a local mirror of artifacts that you control for any serious development.
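
To make the pinning concrete, here’s a rough sketch with plain git. The “upstream” repository is simulated locally as a stand-in for a GitHub dependency; the repo names and commit message are made up.

```shell
# Sketch: pin a dependency to an exact commit instead of a branch tip.
cd "$(mktemp -d)"

# Simulate an upstream repo (stand-in for a GitHub-hosted dependency).
git init -q upstream
git -C upstream -c user.name=a -c user.email=a@b \
    commit -q --allow-empty -m "v1"
PIN=$(git -C upstream rev-parse HEAD)   # the exact commit we depend on

# Consumer side: clone, then check out the pinned commit, not a branch.
git clone -q upstream dep
git -C dep checkout -q --detach "$PIN"

# If upstream history is ever rewritten, the pinned hash stops resolving
# and the build fails loudly instead of silently using different code.
```

This doesn’t stop the upstream repo from disappearing, which is why the local mirror is still needed, but it does turn “silently changed” into “visibly broken”.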

1. 6

I think the Go build system issue is a secondary concern.

This same problem would impact existing git checkouts just as much, no? If a user and a repository disappear, and someone had a working checkout of master:HEAD from said repository, they could “silently” recreate the account and reconstruct the repository with the master branch from their checkout… then do whatever they want with the code moving forward. A user doing a git pull to fetch the latest master may never notice anything changed.

This seems like a non-imaginary problem to me.

1. 11

I sign my git commits with my GPG key. If you trust my GPG key and verify it before using the code you pulled, that would save you from using code from a party you do not trust.

I think the trend of tools pulling code directly from GitHub at build time is the problem. Vendor your build dependencies, verify signatures, etc. This specific issue should not be blamed on GitHub alone.

1. 3

Doesn’t that assume that the GitHub repository owner is also the (only) committer? It’s unlikely that I will be in a position to trust (except blindly) the GPG key of every committer to a reasonably large project.

If I successfully path-squat a well-known GitHub URL, I can put the original Git repo there, complete with GPG-signed commits by the original authors, but it only takes a single additional commit (which I could also GPG-sign, of course) by the attacker (me) to introduce a backdoor. Does anyone really check that there are no new committers every time they pull changes?

1. 3

Tags can be GPG-signed. This proves that all commits before the tag are what the person signed. That way you only need to check the people assigned to signing the tagged releases.

2. [Comment removed by author]

1. 2

Seriously, if only GitHub would get their act together and switch to https, this whole issue wouldn’t have happened!

1. 4

I must have written this post drunk.

1. 2

How long before this gets packaged as an Electron app. ;)

1. 2

I really enjoyed these Rich Hickey talks, as well as Clojure, Made Simple. I find these talks really resonate with my experience working on large projects, and they do a great job articulating typical pain points in software development as well as outlining ways of addressing them.

1. 16

I took a job at a hospital a number of years ago building applications for internal clinical use. I get to talk to physicians, see how they’re using my applications, and how they improve outcomes for the patients. Knowing that the apps I work on directly affect patient care makes the work feel a lot more meaningful than any other job I’ve done previously.

1. 5

Been thinking about OP’s question and people who work close to the medical field for some time. Glad to know it’s fulfilling.

1. 7

I just left a small startup in the Rochester, NY area called Bryx (https://bryx.com), and we’ve gotten some incredible feedback from people saying how much easier their lives are now that they don’t have to sit blocked, waiting for pagers and faxes, to get routed to fires and EMS calls.

It’s incredibly fulfilling.

1. 3

Awesome. The feedback helps a ton.

1. 3

That sounds amazing. Would you mind going into a bit more detail about Bryx? If you’re allowed to say, what sort of technology/languages/stack does it use?

1. 1

Just saw this reply. The API is entirely implemented in Kotlin; the backend receiving jobs from departments and putting them into our (for better or worse) MongoDB instance is all Python. The Android app is a mix of Kotlin and Java, and the iOS app is entirely Swift. We have a desktop Electron app and a management site that are written in TypeScript. We’re really big fans of the new developments in programming languages, and especially of type safety: our old PHP API caused so, so many bugs in production from accidentally misspelled variables and a lack of enforced structure.

1. 2

Thank you, that’s really interesting.

2. 3

Please could you describe the applications a bit more? In particular, what sort of languages/software stack/environment do you use? I have occasionally thought about doing something similar - mostly when I get depressed about working for morally dubious people - but my skills and experience never seem to be a good fit.

1. 2

I actually did an interview about my work recently here.

1. 2

Cool, thank you.

1. 2

bad performance on low-end devices (and I suspect higher battery consumption, but can’t really proof this one)

I’d actually argue the opposite here. With a traditional web app you’re sending HTML across the wire, and doing a lot of parsing each time a page loads. Parsing HTML is an expensive operation. With SPA-style apps, you load the page once and pass JSON around containing just the data that needs to be loaded. So, after the initial load, you should expect better resource utilization.

1. 6

I’m not sure that parsing HTML is as expensive as parsing (and compiling) JavaScript, though. Of course you’d pay a high price on each request of an e-commerce web app, but if you want to read an article on some blog, it is faster when you don’t have to load all of Medium’s JS app.

Browser vendors are trying really hard to speed up the startup time of their VMs, but the consensus is that to get to interactive fast, you should ship less JS, or at least less JS upfront.

1. 1

Parsing XML is notoriously expensive; in fact, it’s one of the rationales behind Google’s protocol buffers. Furthermore, even if the cost of parsing XML and JSON were comparable, you’d still be sending a lot more XML if you’re sending a whole page. Then that XML has to be rendered in the DOM, which is another extremely expensive operation.

To sum up, only pulling the data you actually need, and being able to repaint just the elements that need repainting is much faster than sending the whole page over and repainting it on each update.

1. 3

The problem is that incremental rendering is often paired with CPU-intensive event listeners, digest loops, and other crud, causing massive amounts of JavaScript to run for every click and scroll.

1. 1

That’s not an inherent problem with SPAs though, that’s just a matter of having a good architecture. My team has been building complex apps using this approach for a few years now, and it results in a much smoother user experience than anything we’ve done with traditional server-side rendering.

2. 4

This seems like the exact kind of thing we can empirically verify. Do you know of any good comparisons?

1. 1

I haven’t seen any serious comparisons of the approaches. It does seem like you could come up with some tests to compare different operations like rendering large lists, etc.

2. 2

I’m not so sure; a modern HTML parser is fairly efficient. On top of that, a lot of stuff is cached in a modern browser.

My blog usually transfers in under 3 KB if you haven’t cached the page, around 800 B otherwise (which includes 800 bytes from Isso). My website uses less than 100KB, most of which is highly compressed pictures.

Most visitors only view one page and leave, so any SPA would have to match load performance with the 3 KB of HTML + CSS, or the 4 KB of HTML + CSS plus 100 KB of images…

A similar comparison would be required for any traditional server-side rendered application; if you want to do it as an SPA, it should first match (or at least come close to) the performance of the current server for the typical end user.

SPAs are probably worth thinking about if the user views more than a dozen pages on your website during a single visit, and even then it could be argued that proper caching and not bloating the pages would make up a lot of the performance gains.

Lastly, non-SPA websites have working hyperlink behaviour.

1. 1

I think that if your site primarily has static content, then the server-side approach makes the most sense. Serving documents is what it was designed for, after all. However, if you’re making an app, something like Slack or Gmail, then you have a lot of content that will be loaded dynamically in response to user actions. Reloading the whole page to accommodate that isn’t a practical approach in my opinion.

Also, note that you can have working hyperlink behavior just fine with SPAs. The server loads the page, and then you do routing client-side.
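
For illustration, here’s a minimal sketch of the client-side half of that; the route table is hypothetical. A real SPA router would call history.pushState() when intercepting a link click and then match the new path like this, so URLs stay shareable and a reload lands on the right view.

```javascript
// Minimal sketch of the route matching an SPA router does after the URL
// changes (via history.pushState or a page load on a deep link).
// The route table below is hypothetical.
const routes = [
  { pattern: /^\/$/, view: "home" },
  { pattern: /^\/posts\/(\d+)$/, view: "post" },
];

function matchRoute(path) {
  for (const { pattern, view } of routes) {
    const m = path.match(pattern);
    if (m) return { view, params: m.slice(1) };
  }
  return { view: "not-found", params: [] };
}
```

Because every view is addressed by a real URL, opening a link in a new tab or reloading works the same as on a server-rendered site.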

1. 1

Also, note that you can have working hyperlink behavior just fine with SPAs. The server loads the page, and then you do routing client-side.

That’s how it would work in theory; however, 9/10 SPAs I meet don’t do this. The URL of the page is always the same, reloading loses any progress, and I can’t open links in new tabs at all, or even if I can, it just opens the app on whatever default page it has.

Even with user content being loaded dynamically, I would consider writing a server app unless there will be, as mentioned, a performance impact for the typical user.

1. 1

That’s a problem with the specific apps, and not with the SPA approach in general though. Moving this logic to the server doesn’t obviate the need for setting up sane routing.

1. 1

I’ve sadly seen SPAs done correctly only rarely; it’s the exception rather than the rule in my experience.

So I’m not convinced it would be worth it. Also, again, I’m merely suggesting that if you write an SPA, it should match a server-side app’s performance for typical use cases.

1. 1

I agree that SPAs need to be written properly, but that’s just as true for traditional apps. Perhaps what you’re seeing is that people have a lot more experience writing traditional apps, and thus better results are more common. However, there’s absolutely nothing inherent about SPAs that prevents them from being performant.

I’ve certainly found that from a development perspective it’s much easier to write and maintain complex UIs using the SPA style as opposed to server-side rendering. So, I definitely think it’s worth it in the long run.

1. 1

I’ve built enough apps both ways now to feel confident weighing in.

If you build a SPA, your best case first impression suffers (parsing stutters etc), but complex client side interaction becomes easy (and you can make it look fast because you know which parts of the page might change).

I no longer like that tradeoff much; I find too few sites really need the rich interactivity (simple interaction is better handled with jquery snippets), and it’s easier to make your site fast when there are fewer moving parts.

This might change as the tooling settles down; eg webpack is getting easier to configure right.

1. 2

The tooling for JS is absolutely crazy in my opinion. There are many different tools you need to juggle, and they continuously change from under you. I work with ClojureScript, and it’s a breath of fresh air in that regard. You have a single tool for managing dependencies, building, testing, minifying, and packaging the app. You also get hot code loading out of the box, so any changes you make in code are reflected on the page without having to reload it. I ran a workshop on building a simple SPA-style app with ClojureScript; it illustrates the development process and the tooling.

1. 6

I wish the browser wars meant we got some more variety rather than more of the same. We are getting boxed in between two vendors (three if you count webkit/safari).

While I understand everyone wants their browser to be snappy, and speed perception drives user adoption, I have other priorities.

• I’d like the browser to help me with usability by using larger fonts or disabling some effects (gradients and low contrast are the new blink).
• Videos sometimes don’t play at all, or have choppy sound. But the native video players on my system can play the same stream just fine. Why can’t I just outsource playback to the OS?
• Input handling in the browser always defers to the web page. Sometimes I just want to scroll the page or paste into an input field, but the webpage defined some bindings that prevent me from doing it. I tried to hack around this with some webkitgtk code, but even then I was not 100% successful (let’s face it, I want normal mode in my browser).

I’m savvy enough to have a long list of hacks to do some of this stuff, but it seems to be getting harder to do. I consider Firefox to be the more configurable of the two, but each release breaks something or adds some annoyance that breaks something else. Currently I’m seriously pondering switching from Firefox to Chromium because ALSA does not work with the new sandbox.

The wide scope of browser APIs means they are more like full operating systems than single applications. In fact I think my laptop lacks the disk/ram to build chrome from source. Webkit is likely the most hackable of the bunch, but then again I have no experience with CEF. It seems likely that the major browsers will continue to converge until they become more or less the same, unless some other player steps up.

1. 10

Firefox is introducing support for decentralized protocols in FF 59. The white-listed protocols are:

• Dat Project (dat://)
• IPFS (dweb:// ipfs:// ipns://)
• Secure Scuttlebutt (ssb://)

I think that’s moving things in an interesting direction as opposed to doing more of the same.

1. 7

Hey! I made that patch! :-D

So basically the explanation is simple: there is a whitelist of protocols you can have your WebExtension take over.

If the protocol you want to control is not on that whitelist, such as a hypothetical “catgifs:” protocol, you need to prefix it, like “web+catgifs” or “ext+catgifs”, depending on whether it will be used from the add-on or by redirection to another Web page. This makes it inconvenient to use with lots of decentralization protocols, because in many other clients we are already using URLs such as “ssb:” and “dat:” (e.g., check out Beaker browser). In essence, this allows us to implement many new cool decentralization features as add-ons now that we can take over protocols. So, you could be in Firefox browsing the normal web and suddenly see a “dat:” link; normally you’d need to switch to a dat-enabled app, but now you can have an add-on display that content in the current user-agent you’re using.
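
For reference, the manifest.json side of this looks roughly like the following sketch; the prefixed protocol is the hypothetical “catgifs” one from above, and the URL is a placeholder:

```json
{
  "protocol_handlers": [
    {
      "protocol": "ext+catgifs",
      "name": "Cat GIFs handler",
      "uriTemplate": "https://example.com/gifs?src=%s"
    }
  ]
}
```

The %s is replaced with the full URL the user clicked, which is how the handler page (or add-on) gets to see the original “catgifs:” link.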

Still, there is another feature that we need before we can start really implementing decentralization protocols as pure WebExtensions: we need both TCP and UDP APIs like we had in Firefox OS (as an example, Scuttlebutt uses UDP to find peers on the LAN and its own muxrpc TCP protocol to exchange data; DAT also uses UDP/TCP instead of HTTP).

I have been building little experiments in Firefox for interfacing with Scuttlebutt which can be seen at:

I hope to start a conversation in the add-ons list about TCP and UDP APIs for WebExtensions soon :-)

1. 1

Fantastic work! :)

2. 2

Well, on Windows you have a 3rd option: IE.

1. 22

I think the Fediverse (GNU Social / Mastodon) is the decentralized social network closest to critical mass so far. You can either join a node run by someone you trust, or run your own. If by privacy you mean you don’t want the server operator collecting usage profiles on you and so on, that might be a good fit. Messages are still usually public though.

If you want messages private and not easily scrapeable on the public internet, the Fediverse isn’t quite as good a fit, although there are ways to do it with GNU Social at least, by setting up private groups on a single node. But that’s not as widely used, and you’d have to get your social group all on a GNU Social node. Another approach might be to make an ad-hoc Signal group (you can add multiple people to a group-messaging session). Messages there are end-to-end encrypted and it’s relatively easy to get people on Signal (many people I know are already on it), but messages do go via a centralized server infrastructure, so Signal can collect metadata even if not the message contents (I think I probably trust them more than I trust Facebook/Twitter, but still).

1. 2

I used Mastodon a few months back. I found its feature set closer to Twitter than Facebook; to me it’s a good platform for seeing news, cool tech trends, or following people I respect in the tech world. Not so much for staying in contact with real-world friends, organizing outings, etc.

I also find the “join a node run by someone you trust” to be a barrier to joining. My non-tech friends probably will as well.

I really don’t mind the server being centralized. As long as (1) the company has a clear mission statement and has not given reason to doubt that mission, and (2) has a zero-knowledge infrastructure (I think metadata is ok), I am happy to use it.

1. 9

One option with Mastodon is to just run your own private instance for friends. It’s pretty easy to set up, and will run fine on a \$5 a month Linode instance.

1. 7

Maybe give Diaspora a try; same basic ideas, but I believe it’s closer to Facebook than Twitter.

Edit: I think the idea of not minding a centralized solution and only using a company you can trust are sort of opposed. If your solution is centralized and you lose trust in the company, you’re stuck with abandoning the network or your principles. If you’re using a federated network then you can still use the network without supporting that specific company (node/instance). And I’m not sure what’s so hard about trusting an individual/small group of people instead of a company.

1. 3

Agreed on the last part. Companies are composed of constantly changing and potentially large groups of people all hidden behind a shallow corporate identity.

Trusting individuals makes a lot more sense to me.

1. 7

If you haven’t tried Firefox for Android, I highly recommend checking it out. I find it’s much snappier than Chrome, has adblock, and a better UI in my opinion. For example, tab management is a lot saner. It also provides an option to load links in the background if you open them from a different app.

1. 3

It also provides an option to load links in the background if you open them from a different app.

I switched this on by accident at some point and I find it’s wonderful.

1. 12

I recently switched from Chrome to Firefox, due to some annoyances and problems I’ve been encountering in Chrome. For example, the entire Chrome UI goes unresponsive frequently when I use the IRCCloud web application. And Chrome is woefully behind when it comes to stopping autoplay videos from playing. This should have been built-in functionality for a long time now - there’s just no excuse for it not being there. And using Ctrl+click to put individual Google Hangouts conversations into their own tabs is now causing problems too (after a while, the tab goes blank!). This is a brand new problem I’m seeing in Chrome 64.

Not sure what’s happening with the Chrome team. They seem to have lost their mojo.

The Firefox UI is now nice and smooth. And Firefox is fast. I can no longer perceive any performance differences between Chrome and Firefox. I think the two browsers are on par again – and I would even give Firefox the edge at the moment.

1. 12

I’ve always liked Firefox better ideologically, but I just couldn’t deal with the reduced performance. I tried the FF 57 beta, and never looked back. I’m really glad Mozilla finally managed to put out serious competition to Chrome.

1. 2

Can’t say that’s been my experience, sadly. I switched from Chrome to Firefox and was having a blast with Quantum; I never thought web pages could load that quickly! But the UI was still clunky, and the whole browser froze several times while using it with more intensive apps (and it really makes me wonder what went wrong when, with several web tools open, the one to really bring my computer to its knees is Slack, an IM app).

I know a friend that’s using it and he’s not having my issues. I wonder if it’s Windows (since he uses macOS) or the fact that my PC is older (ThinkPad T430 vs MBP 2017), but Chrome just never skips a beat, even when the website is under heavy load, while Firefox just gives up when any of the pages gets a little busy (this includes mundane tasks like opening and browsing the inspector).

I really think Mozilla is on the right track for once, and I hope for the best for them. I just hope they don’t get blinded by trying to ace every benchmark, and think about the overall experience more; otherwise they’ll end up doing the same things we hate Chrome for (like breaking the web).

1. 5

I had high hopes for this thing. But then it starts dubbing things “functional” or “reactive” without even explaining what that means.

It does not give an overview of what these frameworks are supposed to achieve in general, nor an overview of their architecture.

It’s not useful at all. This, while still lacking, at least gives some useful information: https://mithril.js.org/framework-comparison.html

1. 2

Moreover, ReasonML is listed under frameworks, but it’s a programming language. (Elm is a language too, but at least it includes a JS framework in its standard library.)

1. 1

Indeed, and it would be nice to start seeing more people include compile-to-JS languages as well. For example, I thought this comparison was interesting: https://medium.freecodecamp.org/a-real-world-comparison-of-front-end-frameworks-with-benchmarks-e1cb62fd526c

1. 1

I’d be interested. :)

1. 2

Personally, I really like the approach of null punning. You just bubble null values up the call chain and let the user handle them. This avoids having to pepper checks all over the code, which is error-prone in languages where the checks are optional, and noisy in those that enforce them. In the vast majority of cases I find that I’ll have a series of computations I want to do on a piece of data, and I only care whether it’s null at the start or the end of that chain. This is a good longer write-up on the approach.

1. 4

and noisy in those that enforce them

It’s not if your language has decent support for them - you can easily do the ‘bubbling up’. And it’s not very often you need things to be nillable, so any cost is mitigated.

1. 3

This is the Objective-C approach, as well, and it’s very nice, once you get used to it.

1. 2

It’s horrible in cases where null isn’t actually a valid value; you get the “cannot read property of undefined” problem where you find out about a failure three modules and two thousand lines away from the actual code problem (often in someone else’s code) and don’t have enough information to find out more. Much better to make invalid states unrepresentable and fail fast rather than going into an invalid state.

1. 10

Some of us miss native desktop applications that worked well. It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app. But at the same time not everyone is satisfied with the solution of “build them all as electron apps starting with a cross-platform browser base plus web technology for the UI”. I can sympathize with app developers who in no way want to sign up to build for 2 or 3 platforms, but I feel like berating dissatisfied users is unjust here. Try comparing a high quality native macOS app like Fantastical with literally any other approach to calendar software: electron, web, java, whatever. Native works great, everything else is unbearable.

1. 8

I think people are just tired of seeing posts like Electron is cancer every other day. Electron is here, people use it, and it solves a real problem. It would be much more productive to talk about how it can be improved in terms of performance and resource usage at this point.

1. 3

One wonders if it really can be improved all that much. It seems like the basic model has a lot of overhead that’s pretty much baked in.

1. 2

There’s a huge opening in the space for something Electron-like, which doesn’t have the “actual browser” overhead. I’m certain this is a research / marketing / exposure problem more than a technical one (in that there has to be something that would work better that we just don’t know about, because it’s sitting unloved in a repo with 3 watchers somewhere).

Cheers!

1. 2

There’s a huge opening in the space for something Electron-like, which doesn’t have the “actual browser” overhead.

Is there? Electron’s popularity seems like it’s heavily dependent on the proposition “re-use your HTML/CSS and JS from your web app’s front-end” rather than on “here’s a cross-platform app runtime”. We’ve had the latter forever, and they’ve never been that popular.

I don’t know if there’s any space for anything to deliver the former while claiming it doesn’t have “actual browser” overhead.

1. 1

But that’s not what’s happening here at all - we’re talking about an application that’s written from the ground up for this platform, and will never ever be used in a web-app front end. So, toss out the “web-app” part, and you’re left with HTML/DOM as a tree-based metaphor for UI layout, and a javascript runtime that can push that tree around.

I don’t know if there’s any space for anything to deliver the former while claiming it doesn’t have “actual browser” overhead.

There’s a lot more to an “actual browser” than a JS runtime, DOM, and canvas: does an application platform need to support all the media codecs and image formats, including all the DRM stuff? Does it need always-on, compiled-in OpenGL contexts, networking, legacy CSS support, etc.?

I’d argue that “re-use your HTML/CSS/JS skills and understanding” is the thing that makes Electron popular, more so than “re-use your existing front end code”, and we might get a lot further pushing on that while jettisoning webkit than arguing that everything needs to be siloed to the App Store (or Windows Marketplace, or whatever).

1. 2

But that’s not what’s happening here at all - we’re talking about an application that’s written from the ground up for this platform, and will never ever be used in a web-app front end. So, toss out the “web-app” part, and you’re left with HTML/DOM as a tree-based metaphor for UI layout, and a javascript runtime that can push that tree around.

Huh? We’re talking about people complaining that Electron apps are slow, clunky, non-native feeling piles of crap.

Sure, there are a couple of outliers like Atom and VSCode that went that way for from-scratch development, but most of the worst offenders that people complain about are apps like Slack, Todoist, Twitch – massive power, CPU, and RAM sucks for tiny amounts of functionality that are barely more than app-ized versions of a browser tab.

“Electron is fine if you ignore all of the bad apps using it” is a terribly uncompelling argument.

1. 1

Huh? We’re talking about people complaining that Electron apps are slow, clunky, non-native feeling piles of crap.

Sure, there are a couple of outliers like Atom and VSCode that went that way for from-scratch development, but most of the worst offenders that people complain about are apps like Slack, Todoist, Twitch – massive power, CPU, and RAM sucks for tiny amounts of functionality that are barely more than app-ized versions of a browser tab.

“Electron is fine if you ignore all of the bad apps using it” is a terribly uncompelling argument.

A couple things:

1. Literally no one in this thread up til now has mentioned any of Slack/Twitch/Todoist.
2. “Electron is bad because some teams don’t expend the effort to make good apps” is not my favorite argument.

I think it’s disingenuous to say “there can be no value to this platform because people write bad apps with it.”

There are plenty of pretty good or better apps, as you say: Discord, VSCode, Atom with caveats.

And there are plenty of bad apps that are native: I mean, how many shitty apps are in the Windows Marketplace? Those are all written “native”. How full is the App Store of desktop apps that are poorly designed and implemented, despite being written in Swift?

Is the web bad because lots of people write web apps that don’t work very well?

I’m trying to make the case that there’s value to Electron, despite (or possibly due to!) its “not-nativeness”, not defending applications which, I agree, don’t really justify their own existence.

Tools don’t kill people.

2. 1

we’re talking about an application that’s written from the ground up for this platform, and will never ever be used in a web-app front end.

I’m really not an expert in the matter, just genuinely curious from my ignorance: why not? If it is HTML/CSS/JS code and it’s already working, why not just upload it as a webapp as well? I always wondered why there is no such thing as an Atom webapp. Is it because it would take too long to load? The logic and frontend are already there.

1. 2

I’m referring to Atom, Hyper, Visual Studio Code, etc. here specifically.

I don’t think there’s any problem with bringing your front end to desktop via something like Electron. I do it at work with CEFSharp in Windows to support a USB peripheral in our frontend.

If it is HTML/CSS/JS code and it’s already working, why not just upload it as a webapp as well?

I think the goal with the web platform is that you could - see APIs for device access, workers, etc. At the moment, platforms like Electron exist to allow native access to things you couldn’t have otherwise; that feels like an implementation detail to me, and may not be the case forever.

no such thing as an Atom webapp

https://aws.amazon.com/cloud9/

These things exist; the browser is just not a great place for them currently, because of the restrictions we have to put on things for security, performance, etc. But getting to that point is one view of forward progress, and one that I subscribe to.

2. 1

I can think of a number of things that could be done off the top of my head. For example, the runtime could be modularized. This would allow loading only the parts that are relevant to a specific application. Another thing that could be done is to share the runtime between applications. I’m sure there are plenty of other possibilities. At the same time, a lot can be done in applications themselves. The recent post on the Atom development blog documents a slew of optimizations and improvements.

2. 4

It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app.

It’s a necessary sacrifice if you want apps that are and feel truly native, that belong on the platform; a cross-platform Qt or (worse) Swing app is better than Electron, but still inferior to an app with a UI designed specifically for the platform and its ideals, HIG, etc.

1. 1

If we were talking about, say, a watch vs a VR system, then I understand “the necessary sacrifice” - the two platforms hardly have anything in common in terms of user interface. But desktops? Most people probably can’t even tell the difference between them! The desktop platforms are extremely close to each other in terms of UI, so I agree that it’s tragic to keep writing the same thing over and over.

I think it’s an example of insane inefficiency inherent in a system based on competition (in this case, between OS vendors), but that’s a whole different rabbit hole.

1. 2

I am not a UX person and spend most of my time in a terminal, Emacs, and Firefox, but I don’t think modern GUIs on Linux (GNOME), OS X, and Windows have all that much in common. All of them have windows and a bunch of similar widgets, but the conventions for what goes where can be quite different. That most people can’t tell does not mean much, because most people can’t tell the difference between a native app and an Electron one either. They just feel the difference if you put them on another platform. Just look how disoriented many pro users are if you give them a machine with one of the other major systems.

1. 1

I run Window Maker. I love focus-follows-mouse, where a window can be focused without being on top, which is anathema to MacOS (or macOS or whatever the not-iOS is called this week) and not possible in Windows, either. My point is, there are enough little things (except focus-follows-mouse is hardly little if that’s what you’re used to) which you can’t paper over and say “good enough” if you want it to be good enough.

2. 2

It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app.

There is a huge middle ground between shipping a web browser and duplicating code. Unfortunately that requires people to acknowledge something they’ve spent a lot of time working to ignore.

Basically, C is very cross-platform. This is heresy but true. I’m actually curious: can anyone name a platform where Python or JavaScript runs but C doesn’t?

UI libraries don’t need to be 100% of your app. If you hire a couple software engineers they can show you how to create business logic interfaces that are separate from the core services provided by the app. Most of your app does not have to be UI toolkit specific logic for displaying buttons and windows.

Source: I was on a team that shipped a cross-platform caching/network filesystem. It was a few years back, but the portion of our code that had to vary between Linux/OS X/Windows was not that big. Also, writing in C opened the door for shared business logic (API client code) on OS X/Linux/Windows/iOS/Android.

Electron works because web technologies have a low bar to entry. That’s not always a bad thing. I’m not trying to be a troll and say web developers aren’t real developers, but in my experience, as someone who started out as a web developer, there are a lot of really bad ones, because you start your path with a bit of HTML and some copy-pasted JavaScript from the web.

1. 1

There’s nothing heretical about saying C is cross-platform. It’s also too much work for too little gain when it comes to GUI applications most of the time. C is a systems programming language, for software which must run at machine speed and/or interface with low-level machine components. Writing the UI in C is a bad move unless it’s absolutely forced on you by speed constraints.

2. 1

It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app.

++ Yes!

Try comparing a high quality native macOS app like Fantastical with literally any other approach to calendar software: electron, web, java, whatever. Native works great, everything else is unbearable.

Wait, what? I think there are two different things here. Is Fantastical a great app because it’s written in native Cocoa and ObjC (or Swift), or is it great because it’s been well designed, well implemented, meets your specific user needs, etc.? Are those things orthogonal?

I think it’s easy to shit on poorly made Electron apps, but I think the promise of cross-platform UI - especially for tools like Atom or Hyper, where “native-feeling” UI is less of a goal - is much too great to allow us to be thrown back to “only Windows users get this”, even if it is “only OS X users get this” now.

It’s a tricky balancing act, but as a desktop Linux user with no plans to go back, I hope that we don’t give up on it just because it takes more work.

Cheers!

PS: Thanks for the invite, cross posted my email response if that’s ok :)

1. 2

Wait, what? I think there are two different things here. Is Fantastical a great app because it’s written in native Cocoa and ObjC (or Swift), or is it great because it’s been well designed, well implemented, meets your specific user needs, etc.? Are those things orthogonal?

My personal view is that nothing is truly well designed if it doesn’t play well and fit in with other applications on the system. Fantastical is very well designed, and an integral part of that great design is that it effortlessly fits in with everything else on the platform.

“Great design” and “native” aren’t orthogonal; the latter is a necessary-but-not-sufficient part of the former.

1. 1

“Great design” and “native” aren’t orthogonal; the latter is a necessary-but-not-sufficient part of the former.

Have to agree to disagree here, I guess. I can definitely believe that there can be well-designed, not-native application experiences, but I think that depends on the success and ‘well-designed-ness’ of the platform you’re talking about.

As part of the necessary background context: I run Linux on my laptop, with a WM (i3) rather than a full desktop environment, because I really didn’t like the design and cohesiveness of GNOME and KDE the last time I tried a full suite. There were many, many apps that could have been well designed if they weren’t pushed into a framework that didn’t fit them.

I look at Tomboy vs. Evernote as a good example. Tomboy is certainly well integrated, and feels very native in a Gnome desktop, and yet if put next to each other, Evernote is going to get the “well-designed” cred, despite not feeling native on really any platform it’s on.

Sublime Text isn’t “native” to any of the platforms it runs on either.

Anyway, I feel like I’m losing the thread of discussion, and I don’t want to turn this into “App A is better than App B”, so I’ll say that I think I understand a lot of the concerns people have with Electron-like platforms better than I did before, and thank you for the conversation.

Cheers!

1. 10

Any post that calls Electron ultimately negative but doesn’t offer a sane replacement (where sane precludes having to use C/C++) can be easily ignored.

1. 10

There’s nothing wrong with calling out a problem even if you lack a solution. The problem still exists, and bringing it to people’s attention may cause other people to find a solution.

1. 8

There is something wrong with the same type of article being submitted every few weeks with zero new information.

1. 1

Complaining about Electron is just whinging and nothing more. It would be much more interesting to talk about how Electron could be improved since it’s clearly here to stay.

1. 4

it’s clearly here to stay

I don’t think that’s been anywhere near established. There is a long history of failed technologies purporting to solve the cross-platform GUI problem, from Tcl/Tk to Java applets to Flash, many of which in their heyday achieved much more traction than Electron has, and none of which turned out in the end to be here to stay.

1. 2

I seriously doubt much of anything, good or bad, is here to stay in a permanent sense.

1. 2

Thing is, Electron isn’t reinventing the wheel here; it’s built on top of web tech that’s already the most used GUI technology today. That’s what makes it so attractive in the first place. Unless you think the HTML/JS stack is going away, there’s no reason to think Electron should either.

It’s also worth noting that the resource consumption in Electron apps isn’t always representative of any inherent problems in Electron itself. Some apps are just not written with efficiency in mind.

2. 5

Did writing C++ become insane in the past few years? All those GUI programs written before HTMLNative5.js still seem to work pretty well, and fast, too.

In answer to your question, Python and most of the other big scripting languages have bindings for gtk/qt/etc, Java has its own Swing and others, and it’s not uncommon for less mainstream languages (ex. Smalltalk, Racket, Factor) to have their own UI tools.

1. 4

Did writing C++ become insane in the past few years? All those GUI programs written before HTMLNative5.js still seem to work pretty well, and fast, too.

It’s always been insane, you can tell by the fact that those programs “crashing” is regarded as normal.

In answer to your question, Python and most of the other big scripting languages have bindings for gtk/qt/etc, Java has its own Swing and others, and it’s not uncommon for less mainstream languages (ex. Smalltalk, Racket, Factor) to have their own UI tools.

Shipping a cross-platform native app written in Python with PyQt or similar is a royal pain. Possibly no real technical work would be required to make it as easy as Electron, just someone putting in the legwork to connect up all the pieces and make it a one-liner that you put in your build definition. Nevertheless, that legwork hasn’t been done. I would lay money that the situation with Smalltalk/Racket/Factor is the same.

Java Swing has just always looked awful and performed terribly. In principle it ought to be possible to write good native-like apps in Java, but I’ve never seen it happen. Every GUI app I’ve seen in Java came with a splash screen to cover its loading time, even when it was doing something very simple (e.g. Azureus/Vuze).

1. 1

Writing C++ has been insane for decades, but not for the reasons you mention. Template metaprogramming is a weird lispy thing that warps your mind in a bad way, and you can never be sane again once you’ve done it. I write C++ professionally in fintech and wouldn’t use anything else for achieving low latency; and I can’t remember the last time I had a crash in production. A portable GUI in C++ is so much work though that it’s not worth the time spent.

2. 1

C++ the language becomes better and better every few years– but the developer tooling around it is still painful.

Maybe that’s just my personal bias against cmake / automake.

1. 3

I think people need to accept that this ship has sailed. Electron is here to stay, and people will build more and more apps on top of it. This is the reality, and it’s unlikely to change in the foreseeable future, because it’s by far the easiest option for writing and maintaining cross-platform applications.

My view is that it would be far more productive to focus on how the Electron runtime can be improved. Reducing Electron’s footprint would improve all the apps built on top of it for free. If you’re not happy with the bloat, then try to help figure out how to reduce it. Moaning that Electron is cancer isn’t helping anybody.