1. 3

Cool stuff, I was just playing with boids a couple of weeks ago. Here’s my version using ClojureScript.

1. 4

Nice! Here’s a video of a version I did in Ogre3D using Common Lisp: https://www.youtube.com/watch?v=INeUifM2Bhg (Sheesh, has it been 9 years already?! Don’t have kids if you value your time people!)

This demo was part of Ogre3D bindings I wrote. I’d have to upload them to GitLab since I took all my projects off GitHub a while ago.

1. 1

That’s pretty awesome, driving Ogre3D from the REPL with CL would be a lot of fun.

1. 2

Easiest way to try this is using the lein-native-image plugin. It’s got a couple of example projects in the repo.

1. 2

I use Scala daily for exactly the reasons they mention for using Clojure. There are probably good reasons why they chose Clojure over Scala, but that whole Scala vs. OCaml vs. Haskell vs. Clojure section seems a bit hand-wavy (both Clojure and Scala have and use monads).

1. 10

You can use monads in any language, but they’re central in Haskell because side effects are tracked by the type system, and monads are the mechanism for doing that. Overall, my experience is that Clojure is a much simpler language than Scala, and it’s easier to train people to use it effectively. The tooling is better, and you don’t have to deal with crazy compile times or bytecode compatibility problems. The language is very stable: libraries from years ago still run just fine, and you can move to newer versions of Clojure by just bumping up the version. There’s also a comprehensive web story, from front-end development to sharing code with the backend. I know Scala.js exists, but it’s not nearly as mature as the ClojureScript ecosystem. These are some of the reasons my team picked Clojure over Scala.

1. 1

Did you consider / why did you reject some combination of Jython, RPython, or PyPy? At least Jython would have given you the benefits of the JVM without rewriting (at least mostly).

1. 1

I’m not affiliated with AppsFlyer, and my team moved to Clojure from Java. I was just giving the reasons for why we picked it over Scala.

1. 13

I think this is an incredibly important development. Seeing what Intel, and to a lesser extent AMD, have been doing with their chips is downright scary. Now that they embed their own OS in the chip, one you have no access to or control over, users are becoming locked out of their own hardware. We need open architectures to retain general-purpose computing, where anybody is able to write and run their own code.

1. 2

Strongly agree. There’s stuff like OpenBMC for data-centres, but it’s hard to do when everything’s locked down by vendors.

1. 1

Alas, we still need the volumes to go way way up to compete with ARM (and Intel) on price.

Devkits for these SoCs are currently way, way more expensive than an RPi, for example.

1. 1

I definitely find there’s a strong relationship between language complexity and bikeshedding. When you have a big language like Haskell or Scala, it’s easy to get distracted from solving the actual problem by trying to do it the most “proper” way possible. This is also how you end up with design astronautics in enterprise Java, where people obsess over using every design pattern in the book instead of writing direct and concise code that’s going to be maintainable.

Nowadays I have a strong preference for simple and focused languages that use a small number of patterns that can be applied to a wide range of problems. That goes a long way in avoiding the analysis paralysis problem.

1. 5

The purpose of ORMs is not to make SQL “easier” or “cleaner” but to make SQL composable. Otherwise you have to compose SQL queries by concatenating strings.

The problem is that SQL itself is a pretty garbage query language. If it were based on data structures such as JSON or s-expressions or whatever, instead of “fake natural language”, ORMs would be mostly unnecessary.
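To make the composability point concrete, here’s a toy sketch in Python (a made-up mini-builder, not any real library) of representing a query as plain data and only compiling it to SQL text at the end. Composing queries then becomes ordinary data manipulation rather than string surgery:

```python
# Toy sketch (hypothetical, not a real library): a query is just a dict,
# in the spirit of data-driven builders like HoneySQL.

def compile_query(q):
    """Compile a query dict into a SQL string plus a parameter list."""
    cols = ", ".join(q["select"])
    sql = f"SELECT {cols} FROM {q['from']}"
    params = []
    if "where" in q:
        clauses = []
        for col, val in q["where"]:
            clauses.append(f"{col} = ?")
            params.append(val)
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

base = {"select": ["id", "name"], "from": "users"}
# Composition is plain data manipulation -- no string concatenation needed:
filtered = {**base, "where": [("active", 1)]}

print(compile_query(filtered))
# -> ('SELECT id, name FROM users WHERE active = ?', [1])
```

The point isn’t this particular helper; it’s that once queries are data, adding a clause, merging two queries, or reusing a base query is trivial.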

1. 3

HoneySQL uses s-expressions to represent SQL, and it maps directly to SQL syntax, so there’s no additional indirection. The semantics are preserved; you’re just using a different syntax to express them.

1. 1

As much as I agree with the basic premise, I think the author misses the other side: even within the traditional RDBMS space, SQL differs slightly between databases. That’s why, although I end up diving into the SQLAlchemy docs quite frequently when I use that library, it tends to guarantee my program can be used more easily with more databases without having to worry about their individual quirks in how SQL is implemented.

And, dare I say it, I kind of like Mongo’s query language; once you get your head around it, there are some cool things it can do. It doesn’t have the slick declarative syntax of SQL, but it has its merits for that domain.

1. 2

If a project has an explicit requirement to support multiple databases, then SQL differences can become a problem. However, my experience is that that’s not the case for most projects. Typically you pick a database up front, and you’re not going to be switching databases in the middle of development.

I also think that the approach taken by HoneySQL is a good middle ground. It represents queries as structured data, so you’re able to easily manipulate and compose them programmatically. However, it maps directly to SQL, avoiding the problem of having a leaky abstraction on top of it.

1. 0

It most likely took you more time to write this rant than it takes to be productive in a new ORM.

1. 9

I’m not the author, but I’ve used many ORMs over the years, and my experience is that the approach simply doesn’t work well in practice. It’s pretty much impossible to generate efficient queries automatically for the general case. In most cases you’re going to be writing SQL for performance, unless your app deals with trivial amounts of data. The worst part is that ORMs look like they work during development, because you’re not hitting heavy loads, and you often only start seeing the problems once you’re already in production.

SQL is already a great DSL for writing relational queries, and there’s no real value in wrapping it with another leaky abstraction. Personally, I really like the approach that HugSQL takes, which lets you write plain SQL queries and automatically generates query functions from them.
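For flavor, here’s a rough Python approximation of the HugSQL idea (the `make_query_fn` helper below is hypothetical; HugSQL itself is a Clojure library): queries stay as plain SQL with named placeholders, and callable query functions are generated from them:

```python
# Sketch of the HugSQL approach in Python: plain SQL in, functions out.
import sqlite3

def make_query_fn(sql):
    """Return a function that runs `sql`, binding :name placeholders
    from keyword arguments via the driver's own parameterization."""
    def run(conn, **params):
        # sqlite3 natively supports :name parameters, so the keyword
        # arguments can be passed straight through as the binding map.
        return conn.execute(sql, params).fetchall()
    return run

users_by_status = make_query_fn(
    "SELECT name FROM users WHERE status = :status")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, status TEXT)")
conn.execute("INSERT INTO users VALUES ('ada', 'active'), ('bob', 'inactive')")
print(users_by_status(conn, status="active"))  # -> [('ada',)]
```

The SQL stays visible and editable as SQL, while the application code only sees ordinary functions.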

1. 1

The problem with raw SQL, though, is that it really gets in your way if you ever decide to switch SQL backends. Maybe this is more of a startup-y issue, but having the ability to safely switch between them is super helpful when the time comes that you need it.

I think that the bigger concern with ORMs is that so many exist that are poorly designed. It really is a hard problem, and choosing the right one is difficult as well. Some projects like Django have done a great job with their ORMs, but other platforms like Node.js have made it hell.

I’d prefer raw SQL in Node.js simply because the ORMs aren’t there, but the state of ORMs in any given ecosystem is a completely separate issue from whether ORMs are good or not.

1. 1

I wouldn’t go so far as to call it “great”, but it’s good enough for most things. At the very least I wish it were hard to write queries that are prone to injection attacks.

In general I agree that most ORMs and query DSLs fall down in one of two ways: they aren’t general enough and fall apart when you want to do anything complex (e.g. ActiveRecord), or they try to do everything, but the syntax is so arcane that raw SQL is easier (e.g. almost every relational query DSL I ever saw before ActiveRecord became popular).

1. 2

Note that HugSQL addresses the injection attack problem because queries are inherently parameterized. You write the SQL with placeholders for the variables, and then you pass in a map keyed on the names of the placeholders. The values are automatically sanitized when they’re bound.
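The same mechanism can be illustrated with Python’s DB-API (sqlite3 here; HugSQL does the equivalent via JDBC on the JVM): the value is bound as a parameter rather than spliced into the query text, so an injection attempt is treated as inert data:

```python
# Why parameterized placeholders block injection: the driver binds the
# value as data instead of interpolating it into the SQL string.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "alice' OR '1'='1"
# The whole string is compared literally against the name column,
# so the classic injection payload simply matches no rows.
rows = conn.execute(
    "SELECT name FROM users WHERE name = :name", {"name": evil}).fetchall()
print(rows)  # -> []
```

Had the value been concatenated into the query text instead, the `OR '1'='1'` clause would have matched every row.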

1. 2

Yeah, my complaint isn’t with SQL per se, but with how SQL is often used in application frameworks.

1. 27

I’d like to provide a more sympathetic outside perspective.

There are a few common complaints about Elm and the Elm community:

• Blocking discussion (locking threads etc.)
• Insufficient communication about Elm development
• Things getting removed from the language

With regards to blocking discussion, I think the logic is something like this:

• The core developers have a roadmap for Elm development and they want to stick to it
• They tried including more developers but haven’t found an effective way to deal with more contributors
• Therefore, they have limited time
• They can spend this time rehashing the same arguments and debating half-baked ideas, or they can spend this time following their roadmap, but not both.

I would prefer that the discussions weren’t removed or locked, but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time. I’ve read most of these discussions, and other than people venting, nothing is ever achieved in them. My reflexive reaction is to be uncomfortable (like a lot of other people) but then, there is also a certain clarity when people just say that they will not engage in a discussion.

With regards to insufficient communication, I think the main thing to understand is that Elm is an experiment in doing things differently, and it’s causing a clash with conventional understanding. Elm is about getting off the upgrade treadmill. So, for example, when a new release like Elm 0.19 comes out, it happens without public alpha and beta phases, and it’s not actually the point where you go and immediately migrate your production code to it! It’s only the point to start experimenting with it, the point where library and tool authors can upgrade and so on. (There was quite a bit of activity prior to release anyway, it just wasn’t advertised publicly.)

Finally, the most contentious example of a “feature” getting removed is the so-called native modules (which basically means the ability to have impure functions written in JS in your Elm code base). As far as I can tell (having followed Elm since 0.16), native modules were always an internal implementation detail and their use was never encouraged. Nevertheless, some people started using them as a shortcut anyway. However, they were a barrier to enabling function-level dead code elimination, which is the main feature of the 0.19 release, so the loophole was finally closed. Sure, it’s inconvenient for people who used them, but does anyone complain when, say, Apple removes an internal API?

Ultimately, Elm is just an open source project and the core maintainers don’t really owe anybody anything - no contracts are entered into and no funds are exchanged. They can do whatever they want.

Of course, there is a question of the long term effects this approach is going to have on the community. Will it alienate too many people and cause Elm to wither? Will Elm remain a niche language for a narrow class of applications? That remains to be seen.

1. 23

but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time.

Over the years, I have come to believe this is a vital part of building a community. Using draconian tactics to stomp out annoying comments is using power unwisely and worse yet – cripples your community in multiple ways.

The first thing to remember is that when a comment (entitled, uninformed or otherwise) comes up repeatedly – that is a failure of the community to provide a resource to answer/counter/assist with that comment. That resource can be a meme, a link, an image, a FAQ, a full-on detailed spec document, whatever. This type of thing is part of how a community gets a personality. I think a lot of the reason there are a bunch of dead Discourse servers for projects is overly stringent policing. You should have a place for people to goof off, and you have to let the community self-police and become a real community. Not entirely, obviously, but on relevant topics.

This constant repetition of questions/comments is healthy and normal; it is the entrance of new people into the community. More importantly, it gives people who are just slightly deeper in the community someone to help, someone to police, someone to create resources for, even to a degree someone to mock (reminding them they aren’t THAT green anymore) – a way to be useful! This is a way to encourage growth, each “generation” of people helps the one that comes after them – and it is VITAL for building up a healthy community. In a healthy community the elders will only wade in occasionally and sporadically to set the tone and will focus on the more high-minded, reusable solutions that move the project forward. Leave the minor stuff to be done by the minor players, let them shine!

Beyond being vital to building the community – it is a signal of where newcomers are hurting. Now if documentation fixes the problem, or a meme… terrific! But if it doesn’t, and if it persists… that is a pain point to look at – that is a metric – that is worth knowing.

1. 5

Yeah, each one of these people gives you a chance to improve how well you communicate, and to strengthen your message. But shutting down those voices runs the risk of surrounding yourself with ‘yes people’ who don’t challenge your preconceptions. Now, it’s entirely up to the Elm people to do this, but I think they are going to find it harder to be mainstream with this style of community.

Note that I’m perfectly fine with blocking and sidelining people who violate a CoC, or are posting hurtful, nonconstructive comments. You do have to tread a fine line in your moderation though. Being overly zealous in ‘controlling the message’ can backfire in unpredictable ways.

Anyway, I continue to follow Elm because I think the designer has some excellent ideas and approaches, even if I do disagree with some of the ways the community is managed.

1. 5

even if I do disagree with some of the ways the community is managed.

I don’t think the two jobs (managing the community and managing the project) should necessarily be done by the same person. I actually think it probably shouldn’t. Each job is phenomenally challenging on its own – trying to do both is too much.

1. 2

Yeah, completely agree! I think it would take a huge weight off that person’s shoulders too! :)

1. 1

I don’t think Evan personally moderates the forums. Other people do it these days.

1. 3

But, they do it on his behalf? This policy of locking and shutting down discussions comes from somewhere. That person directly or indirectly is the person who “manages” the community, the person who sets the policies/tone around such things.

I personally have no idea, I am not active in the Elm community.

1. 1

I’m not sure who sets the policy and how.

2. 2

That’s a very interesting perspective, thanks.

3. 12

I’ll add the perspective of someone who loved Elm and will never touch it again. We’re rewriting in PureScript right now :) I’m happy I learned Elm, it was a nice way of doing things while it lasted.

In Elm you may eventually hit a case where you can’t easily wrap your functionality in ports, the alternative to native modules. We did, many times. The response on the forum and other places is often to shut down your message, to give you a partial version of that functionality that isn’t quite what you need, to tell you to wait until that functionality is ready in Elm (a schedule that might be years!), or until recently to point you at native modules. This isn’t very nice. It’s actually very curious how nice the Elm community is unless you’re talking about this feature, in which case it feels pretty hostile. But that’s how open source rolls.

Look at the response to the message linked in the story: “We recently used a custom element to replace a native module dealing with inputs in production at NoRedInk. I can’t link to it because it’s in a proprietary code base but I’ll be writing and speaking about it over the next couple months.”

This is great! But I can’t wait months in the hope that someone will talk about a solution to a problem I have today. Never mind releasing one.

Many people did not see native modules as a shortcut or a secret internal API. They were an escape valve. You would hit something that was impossible without large efforts that would make you give up on Elm as not being viable. Then you would overcome the issues using native modules which many people in the community made clear was the only alternative. Now, after you invest effort you’re told that there’s actually no way to work around any of these issues without “doing them the right way” which turns out to be so complicated that companies keep them proprietary. :(

I feel like many people are negative about this change because it was part of how Elm was sold to people. “We’re not there yet, but here, if we’re falling short in any way you can rely on this thing. So keep using Elm.”

That being said, it feels like people are treating this like an apocalypse, probably because they got emotionally invested in something they like and they feel like it’s being changed in a way that excludes them.

You’re right though. Maybe in the long term this will help the language. Maybe it will not. Some people will enjoy the change because it does lead to a cleaner ecosystem and it will push people to develop libraries to round out missing functionality. In the short term, I have to get things done. The two perspectives often aren’t compatible.

I’m personally more worried about what will happen with the next major change where Elm decides to jettison part of its community. I don’t want to be around for that.

1. 2

If people encouraged you to use native modules, then that was unfortunate.

I’m not sure I understand the issue with custom elements. Sure, they’re a bit complicated and half baked but it certainly doesn’t require a research lab to use them (in fact, I’ve just implemented one now).

I would agree, however, that the Elm developers have a bit of a hardline approach to backward compatibility. Perhaps there is a misunderstanding around the state of Elm - ie whether it’s still an experiment that can break compatibility or a stable system that shouldn’t.

I’m not sure how I feel about backward compatibility. As a user, it’s very convenient. As a developer, it’s so easy to drown in the resulting complexity.

2. 10

I would prefer that the discussions weren’t removed or locked, but on the other hand, it’s got to be grating to deal with the same entitled, uninformed or complaining comments all the time. I’ve read most of these discussions, and other than people venting, nothing is ever achieved in them. My reflexive reaction is to be uncomfortable (like a lot of other people) but then, there is also a certain clarity when people just say that they will not engage in a discussion.

I’ll go one further and say I’m quite glad those discussions get locked. Once the core team has made a decision, there’s no point in having angry developers fill the communication channels the community uses with unproductive venting. I like the decisions the core team is making, and if those threads didn’t get locked, I’d feel semi-obligated to respond and say that I’m in favor of the decision, or I’d feel guilty not supporting the core devs because I have other obligations. I’m glad I don’t have to wade through that stuff. FWIW, it seems like the community is really good at saying “We’re not going to re-hash this decision a million times, but if you create a thread about a specific problem you’re trying to solve, we’ll help you find an approach that works” and they follow through on that.

I don’t have a lot of sympathy for folks who are unhappy with the removal of the ability to compile packages that include direct JS bindings to the Elm runtime. For as long as I’ve been using Elm, the messaging around that has consistently been that it’s not supported, it’s just an accidental side effect of how things are built, and you shouldn’t do it or you’re going to have a bad time. Now it’s broken and they’re having a bad time. This should not be a surprise. I also think it’s a good decision to actively prohibit it. If people started using that approach widely, it would cause a lot of headaches for the community and hamstring the core team’s ability to evolve the language.

1. 6

I’m quite glad those discussions get locked

and

I like the decisions the core team is making

Do you believe your perspective would change if you didn’t agree with the developers’ decisions? Obviously I have a different perspective, but I am curious whether you think you would still hold yours if you were on the other side.

Additionally, just because the core team has “made a decision” doesn’t mean it wasn’t a mistake, nor that it is permanent. Software projects make mistakes all the time, and sometimes the only way to really realize the mistake is to hear the howls of your users.

1. 3

I’m pretty confident I wouldn’t change my position on this if I wasn’t in agreement with the core team’s choices. I might switch to PureScript or ReasonML, if I think the trade-offs are worth it, but I can’t see myself continuing to complain/vent after the decision has been made. I think appropriate user input is “I have this specific case, here’s what the code looks like, here’s the specific challenge with any suggested alternative.” If the core team decides to go another way after seeing those use cases, it’s clear we don’t have the same perspective on the trade-offs for those decisions. I can live with that. I don’t expect everybody to share my opinion on every single technical decision.

As an example, I use Clojure extensively at work, and I very much disagree with Rich Hickey’s opinions about type systems, but it’s pretty clear he’s thought through his position and random folks on the internet screaming differently isn’t going to change it, it’ll just make his job more difficult. I can’t imagine ever wanting to do that to someone.

sometimes the only way to really realize the mistake is to hear the howls of your users

It’s been my experience that the folks who can provide helpful feedback about mistaken technical decisions rarely howl. They can usually speak pretty clearly about how decisions impact their work and are able to move on when it’s clear there’s a fundamental difference in goals.

1. 2

It’s been my experience that the folks who can provide helpful feedback about mistaken technical decisions rarely howl.

We fundamentally disagree on this point (and the value of the noisy new users), and I don’t think either of us is going to convince the other. So, I think this is a classic case of agree to disagree.

2. 10

I think what bothers me the most about the core team’s approach to features is not that they keep removing them, but that for some they do not provide a valid alternative.

They’ll take away the current implementation of native modules, but coming up with a replacement is too hard, so even though the core libraries can use native code, us peasants will have to do without.

They won’t add a mechanism for higher rank polymorphism because coming up with a good way to do it is hard, so even though the base library has a few magic typeclasses for its convenience, us peasants will have to make do with mountains of almost duplicated code and maybe some code generation tool.

So where does that leave Elm right now? Should it be considered a production-ready tool just by virtue of not having very frequent releases? Or should it be regarded as an incomplete toy language, because of all the breaking changes between releases, all the things that haven’t been figured out yet, and how the response to requests for ways to do things that are necessary in real code is either “you don’t need that”, which I can live with most of the time, or “deal with it for the moment”, which is unacceptable.

I think Elm should make it more clear that it’s ostensibly an unfinished project.

1. 3

They’ll take away the current implementation of native modules, but coming up with a replacement is too hard

They won’t add a mechanism for higher rank polymorphism because coming up with a good way to do it is hard

I don’t think this is a fair characterization of the core team’s reasons for not supporting those features. I’ve read/watched/listened to a lot of the posts/videos/podcasts where Evan and other folks discuss these issues, and I don’t think I’ve ever heard anyone say “We can’t do it because it’s too difficult.” There’s almost always a pretty clear position about the trade-offs and motivations behind those decisions. You might not agree with those motivations, or weigh the trade-offs the same way, but it’s disingenuous to characterize them as “it’s too hard.”

1. 4

I exaggerate in my comment, but what I understood from the discussions around rank n polymorphism I’ve followed is basically that Evan doesn’t think any of the existing solutions fit Elm.

I understand that language design, especially involving more complex features like this, is a hard issue, and I’m sure Evan and the core team have thought long and hard about this and have good reasons for not having a good solution yet, but the problem remains that hard things are hard and in the meantime the compiler can take an escape hatch and the users cannot.

2. 2

Should it be considered a production-ready tool just by virtue of not having very frequent releases? Or should it be regarded as an incomplete toy language

I always struggle with this line of questioning because “incomplete and broken” describes pretty much all of the web platform in the sense that whenever you do non-trivial things, you’re going to run into framework limitations, bugs, browser incompatibilities and so on.

All you can do is evaluate particular technologies in the context of your specific projects. For certain classes of problems, Elm works well and is indeed better than other options. For others, you’ll have to implement workarounds with various degrees of effort. But again, I can say the same thing for any language and framework.

Is it good that it’s so easy to bump up against bugs and limitations? No. But at least Elm is no worse than anything else.

Taking a tangent, the main problem is that Elm is being built on top of the horrifically complex and broken foundation that is the web platform. It’s mostly amazing to me that anything works at all.

1. 10

Is it good that it’s so easy to bump up against bugs and limitations? No. But at least Elm is no worse than anything else.

Having worked with ClojureScript on the front-end for the past 3 years, I strongly disagree with this statement. My team has built a number of large applications using Reagent and whenever new versions of ClojureScript or Reagent come out all we’ve had to do was bump up the versions. We haven’t had to rewrite any code to accommodate the language or Reagent updates. My experience is that it’s perfectly possible to build robust and stable tools on top of the web platform despite its shortcomings.

1. 4

I have the opposite experience. Team at day job has some large CLJS projects (also 2-3 years old) on Reagent and Re-Frame. We’re stuck on older versions because we can’t update without breaking things, and by nature of the language it’s hard to change things with much confidence that we aren’t also inadvertently breaking things.

These projects are also far more needlessly complex than their Elm equivalents, and also take far longer to compile so development is a real chore.

1. 6

Could you explain what specifically breaks things in your project, or what makes it more complex than the Elm equivalent? The Reagent API has had no regressions that I’m aware of, and re-frame had a single breaking change, where the original reg-sub was renamed to reg-sub-raw in v0.7 as I recall. I’m also baffled by your point regarding compiling. The way you develop ClojureScript is by having Figwheel or shadow-cljs running in the background, hot-loading code as you change it. The changes are reflected instantly as you make them. Pretty much the only time you need to recompile the whole project is when you change dependencies. The projects we have at work are around 50K lines of ClojureScript on average, and we’ve not experienced the problems you’re describing.

2. 2

I think the ease of upgrades is a different discussion. There is a tool called elm-upgrade which provides automated code modifications where possible. That’s pretty nice, I haven’t seen a lot of languages with similar assistance.

My point was, you cannot escape the problems of the web platform when building web applications. Does ClojureScript fully insulate you from the web platform while providing all of its functionality? Do you never run into cross-browser issues? Do you never have to interoperate with JavaScript libraries? Genuinely asking - I don’t know anything about ClojureScript.

1. 3

My experience is that the vast majority of issues I had with the web platform went away when my team started using ClojureScript. We run into cross-browser issues now and then, but it’s not all that common, since React and Google Closure do a good job handling cross-browser compatibility. Typically, most of the issues that we run into are CSS related.

We interoperate with JS libraries where it makes sense; however, the interop is generally kept at the edges and wrapped into libraries providing idiomatic, data-driven APIs. For example, we have a widgets library that provides all kinds of controls like date pickers, charts, etc., all behind a consistent internal API.

1. 1

Sounds like a great development experience!

Let me clarify my thinking a bit. For a certain class of problems, Elm is like that as well. But it certainly has limitations - not a huge number of libraries etc.

However, I think that pretty much everything web related is like that - limitations are everywhere, and they’re much tighter than I’d like. For example, every time I needed to add a date picker, it was complicated, no matter the language/framework. But perhaps your widgets library has finally solved it - that would be cool!

So I researched Elm and got a feel for its limitations, and then I could apply it (or not) appropriately.

1. 3

Yeah, I agree that the main question is around the state of Elm. If the message is that Elm isn’t finished, and that you shouldn’t invest in it unless you’re prepared to invest time into keeping up, that’s perfectly fine. However, if people are being sold on a production-ready language that just works, then there appears to be a bit of a disconnect.

It’s obviously important to get things right up front, and if something turns out not to work well it’s better to change it before people get attached to it. On the other hand, if you’re a user of a platform then stability is really important. You’re trying to deliver a solution to your customers, and any breaking changes can become a serious cost to your business.

I also think it is important to be pragmatic when it comes to API design. The language should guide you to do things the intended way, but it also needs to accommodate you when you have to do something different. Interop is incredibly important for a young language that’s leveraging a large existing ecosystem, and removing the ability for people to use native modules in their own projects without an alternative is a bit bewildering to me.

3. 7

To me the problem is that Elm is not conceptually complete. I listed those issues specifically because they’re both things that the compiler and the core libraries can do internally, but the users of the language cannot.

But at least Elm is no worse than anything else.

No, Elm is a language, and not being able to do things in a language with so few metaprogramming capabilities is a pretty big deal compared to a missing feature in a library or a framework, which can easily be added in your own code or worked around.

1. 1

But how is this different from any other ecosystem? The compiler always has more freedom internally. There are always internal functions that platform APIs can use but your library cannot. Following your logic, we should condemn the Apple core APIs and Windows APIs too.

1. 3

No, what I meant is that the core libraries use their “blessed” status to solve those problems only for themselves, thus recognizing that those problems effectively exist, but the users aren’t given any way to deal with them.

1. 2

But there are actually solutions on offer: ports and custom elements. What’s wrong with using them?

1. 4

Ports are very limiting and require much more work to set up than a normal library, and I haven’t used custom elements so I can’t speak for those.

There’s also no workaround for the lack of ad-hoc polymorphism. One of the complaints I hear most about Elm is that writing JSON encoders and decoders is tedious and that they quickly become monstrously big and hard to maintain; often the JSON deserialization modules end up being the biggest modules in an Elm project.

This is clearly a feature the language needs (and already uses with some compiler magic, in the form of comparable, appendable, and so on).
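
For contrast, this is roughly what ad-hoc polymorphism buys you when a language exposes it to users; a minimal sketch in Clojure (the protocol and method names here are illustrative, not any standard API):

```clojure
(require '[clojure.string :as str])

;; Open, per-type dispatch: any type can be taught to encode itself
;; without touching the original definitions.
(defprotocol ToJson
  (to-json [x] "Encode a value as a JSON-ish string."))

(extend-protocol ToJson
  Long
  (to-json [n] (str n))
  String
  (to-json [s] (str "\"" s "\""))
  clojure.lang.IPersistentVector
  (to-json [v] (str "[" (str/join "," (map to-json v)) "]")))

(to-json [1 "a" 2])  ; => "[1,\"a\",2]"
```

New types can participate in to-json after the fact, which is the kind of extension point Elm currently reserves for comparable, appendable, and friends.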

2. 2

I was hoping to read other points of view on that matter, thanks for taking the time writing down yours!

1. 0

Should it be legal to rob a bank and turn yourself in?

1. 4

More like should it be legal to show customers of the bank that it has a hole in the wall covered up with some wallpaper.

1. 1

Too many defects can’t be uncovered passively; they must be exploited before anybody will think “maybe we should fix that.” So no, tedu made a good point. Just because the bank doesn’t lose anything doesn’t mean you didn’t wander into the vault and take things out as proof.

1. 1

I strongly disagree. As a customer of a product or a service I would want to know about the possible exploits against that service. Whether the exploits are made public or not, they’re still there and your data is still vulnerable.

1. 1

Agreed but proving a computer security vulnerability often involves getting either pre-placed data or a random person’s data, at least from the people I’ve read. It’s not quite holding up a bank at gunpoint, but it’s definitely going through the hole in the drywall.

2. 2

Basically, should it be legal to do a physical security pen test without a contract.

1. 2

I get that core.async Clojure code follows the Go channel example fairly closely, but it seems misleading wrt the actual problem of starting two expensive computations and aggregating their results.

This can be done in Clojure using futures as succinctly as it’s shown to be in Scala.

1. 3

Yup, the Scala example could be translated directly as:

(defn long-comp-1 [] 5)
(defn long-comp-2 [] 6)

(defn aggregate [i j]
  (println "i is" i "j is" j))

;; futures start running as soon as they're created
(def result-1 (future (long-comp-1)))
(def result-2 (future (long-comp-2)))

(defn aggregated-future []
  ;; deref blocks until each future has a value
  (aggregate @result-1 @result-2))
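
One caveat worth adding (my note, not part of the Scala comparison): deref on a future also takes a timeout and a timeout value, which helps if a long computation can hang. A sketch, with aggregate-with-timeout as an illustrative helper name:

```clojure
(defn aggregate-with-timeout
  "Deref both futures, giving up on each after 100 ms (illustrative)."
  [f1 f2]
  [(deref f1 100 ::timed-out)
   (deref f2 100 ::timed-out)])

(aggregate-with-timeout (future 5) (future 6))  ; => [5 6]
```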

1. 3

My blog is primarily about Clojure https://yogthos.net/

1. 4

From 2016, but based on https://github.com/nextjournal it looks like they still use ClojureScript. Would be great to see a 2018 follow-up.

Also @Yogthos what are your thoughts on https://github.com/yogthos/clojure-error-message-catalog 2 years later? Is it still useful/necessary? Did it see traction/adoption? Is it being phased-out by upstream efforts?

1. 4

I got the inspiration for the error message catalog from Elm, and it’s mostly a stopgap measure to help with the most common gotchas. I don’t have the impression that it got a lot of adoption over the past couple of years, which might indicate that most people aren’t struggling too much with the error messages (or perhaps it just wasn’t discoverable enough). It also looks like Clojure 1.10 will have significantly improved error messages out of the box, and spec allows for tools like expound that produce human-readable errors. I’m hopeful that the error catalog won’t be necessary going forward.

1. 3

The link is to a useless spam Quora question. The email provider does exist (see Wikipedia: https://en.wikipedia.org/wiki/Tutanota), but there’s no indication it’s particularly a Gmail alternative rather than just another mail provider.

1. 2

The link provides a good rationale for how the company operates and what their goals are. Their product is open source, and can be self-hosted, which is quite a bit more than just being another email provider in my opinion.

1. 1

1. 2

Looks like the site structure changed, here’s the new link.

1. 7

Worth noting that s-expressions avoid a lot of legibility problems discussed in the article. If we look at the first example under the “providing immediate feedback” section where traditional notation looks like:

50.04 + 34.57 + 43.22 / 3


this would be expressed as:

(+ 50.04 34.57 (/ 43.22 3))


which would be hard to confuse with:

(/ (+ 50.04 34.57 43.22) 3)


A lot of people seem to have the impression that s-expressions are harder to read than traditional syntax, but I find the opposite to be the case. With s-expressions you have simple and predictable rules that remove a lot of mental overhead around figuring out what the code is doing.

1. 2

Similarly, just having the same precedence and associativity for everything would give you an easy-to-predict and easy-to-read syntax. This way you gain terseness, but you have to get used to the associativity of whatever mechanism you’re using, whereas s-expressions (or *shudder* XML, etc.) are more portable, but require you to explicitly state the tree with more characters.

For example, right associative:

50.04 + 34.57 + 43.22 / 3


And for the sum of everything over three, it would be:

(50.04 + 34.57 + 43.22) / 3


This is the style that APL/J/K and various languages inspired by them tend to use (they also add different precedence for certain operations that take another operation as one of their inputs, such as fold). Many people use such languages as an enhanced calculator (there are plotting utilities made for them, etc). For example, in K, where division is % and assignment is ::

force: (6.67e-11*mymass*collidingmass)%radius*radius
yearlybill: 12*rent+electric+internet


Or with functions, where / is fold:

force:{[m1;m2;radius](6.67e-11*m1*m2)%radius*radius}
yearlybill:{[monthlyutilities]12*+/monthlyutilities}

1. 1

Then you get the situation that 1 * 2 + 3 and 3 + 1 * 2 mean different things, which is horrible, because people will always assume that they don’t.

I don’t know why people have such a problem with a + b + c / 3 meaning a + b + (c / 3). It’s just something you have to get used to; it’s not really that difficult, and there are much bigger problems that need solving. But if it’s really such a big deal, just make it a function: \frac{a + b + c}{3} in LaTeX is good enough for mathematicians, so frac(a + b + c, 3) should be good enough for programmers.

1. 1

Then you get the situation that 1 * 2 + 3 and 3 + 1 * 2 mean different things, which is horrible, because people will always assume that they don’t.

I don’t know why people have such a problem with 1 * 2 + 3 and 3 + 1 * 2 meaning different things. It’s just something you have to get used to when using a different language; it’s not really that difficult, and there are much bigger problems that need solving.

1. 2

The universal rules of mathematical expressions create a strong precedent. People expect them to hold. They get confused when they don’t. Even if they are arbitrary.

I’m not aware of any language anywhere in all of programming or mathematics that uses different rules and has sustained any kind of popularity. Seems like a hard requirement to ever be successful in my experience.

1. 1

They aren’t “universal”. See my other comment. “Sustained any kind of popularity” is a vacuous statement; Forth is used extensively in embedded applications. Your calculator uses left-to-right operator precedence, and yet you don’t struggle to translate from PEMDAS or whatever system you use.

1. 2

They are absolutely universal. All mathematicians agree on the order of operations here.

1. 2

Funny, because every mathematician I’ve talked to and listened to about order ambiguity agrees with me and says you should put in parentheses to disambiguate.

The reality is that because it is cultural, it does not matter whether you have a solution to the problem if not everyone is using it. In my opinion, abandoning order of operations is much simpler: the order is arbitrary, needlessly convoluted, and does not allow for the expansion of operators. You can make things abundantly clear by using Polish notation.

- / 2x 3y 1

Before you throw your arms up in frustration yes there are proofs done in this format, and they’re great.
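
For what it’s worth, that prefix expression maps directly onto an s-expression; a small Clojure sketch, with x and y bound to made-up values purely for illustration:

```clojure
;; - / 2x 3y 1  reads as  (2x / 3y) - 1
(let [x 3 y 1]                    ; illustrative bindings
  (- (/ (* 2 x) (* 3 y)) 1))     ; => 1
```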

1. 0

because it is cultural

Yeah, but it isn’t cultural. It’s universal, as I’ve explained.

1. 1

I suppose if it is universal, then there are severe pedagogical deficiencies, which doesn’t surprise me terribly. It still would have been completely avoided with a simpler and clearer precedence system. It took me a while to realize that you were talking strictly about mathematicians whereas I was talking about all people. Apologies for my poor communication.

2. 1

“Order of operations” has been an arbitrary curse on mathematics since its creation; different cultures don’t actually agree, and in addition it restricts the creation of new operators. I’m not particularly invested in left-to-right or right-to-left, but either would be much simpler than the random format we have now.

1. 2

Cultures that don’t use ÷ and × often don’t write sentences left-to-right and pages top-to-bottom. They might not even use arabic numerals.

I don’t see how it restricts the creation of new operators. Mathematicians seem to have no problem introducing new operators: ∧, ∨, →, ↔, dots, existing operators in circles and all sorts of silly new operators are used all over algebra without any real issue. If it’s not obvious from context, you put brackets in.

1. 1

What precedence does modulus have? Is it the same as division, or should it be done first, or last? If we had an order of precedence that could accommodate new operators, this question wouldn’t need to be asked, and I wouldn’t have to use parentheses, which, let’s be honest, are a hack.

1. 1

Modulus isn’t a standard mathematical operator. But if you defined it, you could just say what its precedence is.

3. 1

wait are you using PEMDAS or BODMAS?

1. 1

Same thing. Brackets = parentheses; multiplication and division are done at the same time, so their order is whatever sounds better when reading out the abbreviation. What synonym of exponent does the ‘O’ stand for?

1. 1

Multiplication and division are not done at the same time. “Orders”, I believe. http://www.math.harvard.edu/~knill/pedagogy/ambiguity/

1. 1

Multiplication and division are always done at the same time (with left-associativity: a÷b÷c = (a÷b)÷c in mathematics), and this carries over into programming languages that use * and / to emulate × and ÷.

2x/3y-1 is not well-defined notation. It’s not mathematics, because mathematics doesn’t use a slash in the middle of linear text for division (it uses a horizontal line or ÷ depending on the context, though really depending on the level, because I haven’t seen anyone use ÷ since primary school), and it’s not any programming language I’m aware of either. Randomly writing down some text and then claiming it’s ambiguous is pretty silly.

2 × x ÷ 3 × y - 1 is completely unambiguous, on the other hand: (((2 × x) ÷ 3) × y) - 1. Try putting it into google, or asking someone what 2 × 9 ÷ 3 × 2 - 1 is. Their answer is 11.
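
The left-to-right grouping can be written out as an s-expression to check it; a quick Clojure sketch:

```clojure
;; 2 × 9 ÷ 3 × 2 - 1, grouping * and / left to right
(- (* (/ (* 2 9) 3) 2) 1)  ; => 11
```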

Mathematicians almost never use ÷ anyway, we write (2 x) / (3 y) where the line is horizontal (not possible on this platform as far as I can tell). But the same rule applies to addition and subtraction: 2 + x - 3 + y - 1 is universally agreed to be (((2 + x) - 3) + y) - 1.

Programming languages usually approximate ÷ and × with / and * for the sake of ASCII, so the same rules apply as with those operators. I’m not sure I know of any programming language where you can multiply variables by juxtaposition.

I once saw a proposal that it should be based on whitespace: 1+x * 3+y would be (1 + x) * (3 + y), while 1 + x*3 + y would be 1 + x * 3 + y. I thought it was quite a cute proposal, if perhaps prone to error.

1. 2

Americans use a slash in the middle of linear text to mean division. You clearly didn’t even read the article. Just because you can do multiplication and division from left to right doesn’t mean that’s what people do.

1. 1

Americans use a slash in the middle of linear text to mean division.

Don’t think so.

You clearly didn’t even read the article.

The article has a bunch of monospace ASCII.

Just because you can do multiplication and division from left to right doesn’t mean that’s what people do.

It’s what literally everybody in the entire world does.

1. 1

From what I see, there appear to be two distinct camps of Scala developers. The first camp consists of OO devs who are looking for a more modern Java, and the second camp consists of FP devs who are looking for a statically typed functional language on the JVM.

Nowadays Java is starting to address many of the gaps, while Kotlin provides a much simpler alternative that focuses on the features OO devs are looking for when considering Scala. I imagine that going forward Scala will become a hard sell for the OO crowd. Meanwhile, Scala is starting to have competition on the FP front from Eta, which is a Haskell dialect that runs on the JVM and is able to leverage existing Haskell libraries.

The goal of simplifying the language with Dotty seems like a good idea, but it’s going to create a lot of friction for existing projects, similar to what we’re seeing with Python 3. I think that at this point Scala needs to clearly identify what specific problem it addresses that isn’t addressed better by other languages.

1. 3

I think your assessment that Scala faces pressure from both Java/Kotlin as well as functional languages is spot on.

https://www.benfrederickson.com/ranking-programming-languages-by-github-users/ indicates that Scala lost 20% of its active users within a year, while Kotlin tripled its amount of active users, surpassing Scala by a large margin already.

The goal of simplifying the language with Dotty seems like a good idea […]

That’s not going to happen. The promises on conference slides do not reflect the reality.

There are already 6 or 7 new keywords, not to mention other additions, and the cleanups do not depend on Scala 3 at all; in fact, many of them have been implemented (and shipped behind a flag) for years already.

1. 2

As others have pointed out, email is a federated model that works just fine. Other federated services are no different in any practical way. The privacy issues with centralized services revolve around them actively working to mine your data. These companies are in the business of making money, and you are their product. This is the fundamental difference between using something like Twitter and Mastodon. Since commercial offerings are trying to monetize you, they have a lot of incentive to invade your privacy by demanding personal information, and to keep you engaged using their services.

The situation is very different with federated services. The code itself is open source allowing people to audit it, and fork it. If a service moves in a direction the community doesn’t like then they can fork the code, and set up new instances.

The federation model is open, meaning that you’re not tied to a specific service. Mastodon, Pleroma, and PeerTube all federate over ActivityPub. This also means that the model is designed for interoperability between services from the ground up. Meanwhile, centralized commercial services try to create walled gardens and prevent you from moving data between them.

Access to the data is democratized as well. The things you post on a public forum will obviously be public, but with a commercial service only the provider has the ability to analyze that data. Federated services like Mastodon have no incentive to keep users from accessing the data. I think it’s a better situation when everybody has access to the data the service collects, as opposed to just the providers themselves.

The scale advantage comes from the fact that you’re not stuck with a single provider. The system is inherently more robust, and not only in terms of technology. When you have a federation, it’s no longer possible to enforce a single set of rules for everybody. Twitter or Facebook get to choose what content you see, which allows them to censor content and manipulate the network much more easily than you can with a distributed system.

1. 1

The privacy issues with centralized services revolve around them actively working to mine your data. These companies are in the business of making money, and you are their product.

This is a common error. Centralized doesn’t immediately equal evil or selling you out. It depends on how they’re set up. There are companies that just sell you a service without trying to send your data to others. FastMail was a popular Gmail alternative whose users say it’s super fast, too. MyKolab was a Swiss one I found for \$5 a month with a privacy policy. ProtonMail is a recent entry with crypto. HushMail is possibly the oldest of those. ZixCorp has similar services. The PGP company should probably be in this list.

That’s just email. There have long been solutions doing similar things for chat, backups, mobiles, and so on. There’s just hardly anyone buying them. A few have been around a long time making money, mostly selling to businesses, though. There is a market, but it won’t get you rich easily.

One can build centralized, non-profit companies with charters protecting privacy. I’ve pushed this a long time. Also, put it in the EULAs with EU-style penalties for privacy failures if users push for them. Maybe also in the hiring agreement for employees, where they can refuse to work on surveillance or privacy-defeating features without termination. On top of that, one might build several of the same company in different countries as a public-benefit multi-national where they sort of check on each other but otherwise operate independently in their own market with tailored solutions.

There’s a lot of potential. There are also companies taking care of their customers every day using tiny subsets of what I described, often just owners or a company culture that believes in it. People talking like all businesses are evil or have to sell out their customers do those businesses a disservice. Instead, we should try to see what protections we can build on top of those proven models in centralized form before telling people they have no choice but to use decentralized stuff. I mean, we can have people developing both in parallel; I even encourage it, to mitigate the impact of failures.

1. 2

Centralization might not immediately equal evil. But it eventually does. As things get bigger, they require more resources, at some non-linear rate.

So the only reason you can use those non-gmail services, the only way they can stay in business without “going evil” is to remain small enough that their requirements remain low. If they got as popular as gmail, they’d be trying to datamine our emails from grandma to sell us shit too.

There’s practically no examples of some large centralized thing not becoming evil. I don’t think this is solvable.

1. 2

Costco and Publix? Vanguard for investing?

1. 1

I’d say, instead, that centralization makes large-scale evil possible. Whether or not somebody steps up to the plate to take advantage of that possibility depends on how long the system exists, how big a scam they can run, and what social systems are in place to prevent it. If something is making a non-zero amount of money and exists for a few years, the likelihood that it’ll become a scam is pretty high.

2. 2

Centralization itself doesn’t equal selling you out, but the business model for current social media companies is what ultimately drives that. The way centralization plays into this is by locking you into the platform once you start using it. For me the biggest value of federation is that it removes central control from the platform. Anybody can run their own instance and manage it the way they see fit, and people can choose what instances they federate with.

I don’t really have any problem with businesses providing services, and as email shows it’s perfectly possible to do that on top of a federated model. My view is that this is a more robust model overall because it prevents companies from dictating how a service will work for everybody.

1. 1

“ but the business model for current social media companies is what ultimately drives that.”

I definitely agree with that. Now we’re in tricky territory, though. The uptake model for social media is that it has to be easy to use and understand, preferably free if you’re maximizing participants, and dirt cheap to scale if it’s free or low-cost. That already disqualifies most decentralized schemes people create. The last thing is that people go where other people are. So, to bring in the masses, it needs to already be popular, at least among groups of them, with some motivation for them to invite their friends. Those people are mostly locked into Facebook and such right now, with lots of friends, family photos, etc. they might stand to lose.

With that, I’m not sure how to make decentralized, private, social media take off in a big way. It’s one of the only types of applications I have no confidence in. That’s in general, not just decentralized. Only a small number of players even made huge waves. Fewer than that survived with any large usage. We might be stuck with a situation where they stay stuck on social media but we push private messaging as extra medium with other benefits like no limits on characters, immediate delivery, etc. Fortunately, a ton of people already moved to IM. It should be an easier sell than before.

1. 10

On a related note, it’s also worth noting that the user control situation is even worse on mobile devices. You pretty much can’t buy phones or tablets with unlocked firmware that you can easily put your own operating system on.

1. 10

Well there is the Librem at least.

https://puri.sm/shop/librem-5/

1. 1

It is my understanding that even this and the Fairphone still require blobs, and that the baseband is totally opaque. The battle for complete user freedom on mobile still seems to be completely lost.

1. 3

This is correct. Purism routinely exaggerates about what they are able to provide in terms of openness, without any plausible way of actually delivering. It’s quite tiresome.

Not only will Librem 5 have blobs, they’ve now shamelessly announced they intend to use a loophole to procure FSF RYF certification despite this. If this is allowed to stand, it also makes RYF rather meaningless.

2. 7

Also Fairphone:

We offer the ability to choose between the Google experience and the freedom of open source. Both versions are officially supported by Fairphone and we will provide continuous software updates.

In addition, and because the code is openly available, everybody is free to work on making other operating systems work on the Fairphone 2. The community already offers alternative operating systems like Sailfish OS, Ubuntu Touch and LineageOS.

1. 2

Fairphone requires proprietary firmware blobs anyway.

1. 1

Thanks, haven’t seen Fairphone before. I really hope there will be enough of a niche for companies like them and Librem going forward.

1. 5

As a Fairphone user: the market is made by buying the damned phones.

I wish there was an official Sailfish distro. I’m a happy user of the community port, but I also tolerate some glitches. Like not being able to calibrate the proximity sensor or run android apps.

But, as stated, they do have a non-Google android for those who want to be closer to the mainstream and a Google android for people who don’t care that much.

2. 2

You can unlock the bootloader on most Android phones and you can run LineageOS or other AOSP forks, sometimes Ubuntu Touch and Sailfish ports, or postmarketOS.

You typically have to run the vendor android kernel fork if you want to have useful functionality, but some devices (Nexus 5, Nexus 7, Xperia Z2, Xperia Z2 Tablet) can run mainline Linux.

https://wiki.postmarketos.org/wiki/Devices

1. 1

I know that you can unlock the bootloader, but I think that’s very far from ideal. Also the tools themselves tend to be closed source, and sketchy. You should be able to decide what runs on your phone without jumping through hoops.

1. 10

This is the same tired argument in favor of static typing that you see in every blog. The problem is that while the arguments sound convincing on paper, there appears to be a serious lack of empirical evidence to support many of the benefits ascribed to the approach. Empiricism is a critical aspect of the scientific method because it’s the only way to separate ideas that work from those that don’t.

An empirical approach would be to start by studying real world open source projects written in different languages. Studying many projects helps average out differences in factors such as developer skill, so if particular languages have a measurable impact it should be visible statistically. If we see empirical evidence that projects written in certain types of languages consistently perform better in a particular area, such as reduction in defects, we can then make a hypothesis as to why that is.

For example, if there was statistical evidence to indicate that using Haskell reduces defects, a hypothesis could be made that the Haskell type system plays a role here. That hypothesis could then be further tested, and that would tell us whether it’s correct or not. This is pretty much the opposite of what happens in discussions about static typing, however, and it’s a case of putting the cart before the horse in my opinion.

The common rebuttal is that it’s just too hard to make such studies, but I’ve never found that to be convincing myself. If showing the benefits is truly that difficult, that implies that static typing is not a dominant factor. One large scale study of GitHub projects fails to show a significant impact overall, and shows no impact for functional languages. At the end of the day it’s entirely possible that the choice of language in general is eclipsed by factors such as skill of the programmers, development practices, and so on.

I think it’s important to explore different approaches until such time when we have concrete evidence that one approach is strictly superior to others. Otherwise, we risk repeating the OOP hype when the whole industry jumped on it as the one true way to write software.

1. 6

One large scale study of GitHub projects fails to show a significant impact overall, and shows no impact for functional languages.

That is not the language used by the authors of the paper:

The data indicates functional languages are better than procedural languages; it suggests that strong typing is better than weak typing; that static typing is better than dynamic; and that managed memory usage is better than un-managed.

1. 2

Look at the actual results in the paper as opposed to the language.

2. 3

Anecdote, but TypeScript exists purely to add static types to an existing language. It has near-universal appeal among those who have tried it, and in my experience an entire class of errors disappeared overnight while having almost no cost at all apart from the one-time transition cost.

Meanwhile, “taking all projects will average things out” is unlikely to work well. Language differences are rarely just about types, and different languages have different open source communities with different skill levels and expectations.

1. 3

As much as I like empiricism and the “there’s not actually that much difference” hypothesis, that article has flaws. In particular, it has sloppy categorization, e.g. classifying Bitcoin as “TypeScript”. Also, some of its conclusions set off my “wait, what” meter, such as Ruby being much safer than Python and TypeScript being the safest language of all.

1. 3

The study has many flaws, and by no means does it provide any definitive answers. I linked it as an example of people trying to approach this problem empirically. My main point is that this work needs to be done before we can meaningfully discuss the impacts of different languages and programming styles. Absent empirical evidence we’re stuck relying on our own anecdotal experiences, and we have to be intellectually honest in that regard.

2. 2

That link doesn’t seem to be working. Is this the same study?: http://web.cs.ucdavis.edu/~filkov/papers/lang_github.pdf

I think you make very good points (even though I currently have a preference for static types). I’d love to see more empirical evidence.

1. 1

Thanks, and that is the same study. It’s far from perfect, but I do think the general idea behind it is on the right track.

1. 2

I only skimmed the study, but doesn’t it actually show a small positive effect for functional languages? From the study:

Result 2: There is a small but significant relationship between language class and defects. Functional languages have a smaller relationship to defects than either procedural or scripting languages.

I realise that overall, language had a small effect on defect rate, and they noted that it could be due to factors like the kind of people attracted to a particular language rather than the language itself.

1. 4

The results listed show a small positive effect for imperative languages, and no effect among functional ones. In fact, Clojure and Erlang appear to do better than Haskell and Scala pretty much across the board:

lang     bug fixes  lines of code changed
Clojure    6,022      163
Erlang     8,129    1,970
Scala     12,950      836

defective commits model
Clojure −0.29 (0.05)∗∗∗
Erlang −0.00 (0.05)
Scala −0.28 (0.05)∗∗∗

memory related errors
Scala    −0.41 (0.18)∗     0.73 (0.25)∗∗   −0.16 (0.22)     −0.91 (0.19)∗∗∗
Clojure  −1.16 (0.27)∗∗∗   0.10 (0.30)     −0.69 (0.26)∗∗   −0.53 (0.19)∗∗
Erlang   −0.53 (0.23)∗     0.76 (0.29)∗∗    0.73 (0.22)∗∗∗   0.65 (0.17)∗∗∗
Haskell  −0.22 (0.20)     −0.17 (0.32)    −0.31 (0.26)     −0.38 (0.19)


The study further goes on to caution against overestimating the impact of the language:

One should take care not to overestimate the impact of language on defects. While these relationships are statistically significant, the effects are quite small. In the analysis of deviance table above we see that activity in a project accounts for the majority of explained deviance. Note that all variables are significant, that is, all of the factors above account for some of the variance in the number of defective commits. The next closest predictor, which accounts for less than one percent of the total deviance, is language.

This goes back to the original point that it’s premature to single out static typing as the one defining feature of a language.