Yeah, it looks to be a bunch of BS. Better to use existing law to make a group of non-profits in a number of countries that use traditional banking techniques with logs, decentralized checking, and some kind of corrective mechanisms. Bankers can probably already tell how to do most of that. It’s all ancient technology. For the currency end, high-assurance engineer Clive Robinson always said to just tie one to the value of all kinds of useful commodities or stable currencies. There was in fact one that did it, although I can’t remember its name. So: efficient databases run by non-profits chartered not to nickel-and-dime the customers, with distributed checks, and optionally a currency-like instrument tied to stable, diverse, real-world commodities/currencies.
EDIT: Most objections end up being about what the people running the banks or governments might do. Those risks still exist for Bitcoin, with a tiny number of individuals and miners having massive influence. Exchanges getting robbed all the time. Much worse than the situation for traditional banking.
Yes, if governments allowed ultra-low-overhead, user-friendly, automatable money transfer, that would decrease the usefulness of cryptocurrencies somewhat. But they don’t, not in the first world. Look up KYC/AML laws. A notable third-world exception is M-Pesa, which has done very well.
Those risks still exist for Bitcoin with a tiny number of individuals and miners having massive influence.
Their incentives are aligned with the users of Bitcoin. Miners do well if Bitcoin prospers. The same is not true with the government and my finances. The government does well if they maximize the amount they take from asset holders. Have you ever had your bank account frozen? I have, due to a paperwork error by the state comptroller. That can’t happen with Bitcoin. It’s why I started using it, actually.
Exchanges getting robbed all the time.
That’s why only idiots keep their money in exchanges. This is a non-issue. A big part of the whole point of Bitcoin is that you actually control your assets in a very literal sense, unlike with a bank.
Much worse than the situation for traditional banking.
In what way? It’s cheaper, easier, more flexible, better uptime, etc. etc. The only reason I use dollars is that not everyone takes Bitcoin, and dollars have lower short-term volatility. If you know WTF you’re doing, most of your assets won’t be in dollars anyway, so it’s not like the stability of the dollar is a huge benefit when you’re (hopefully) only holding them for a short time anyway.
So, efficient databases run by non-profits chartered not to nickel-and-dime the customers, with distributed checks and optionally a currency-like instrument tied to stable, diverse, real-world commodities/currencies.
This is pretty much what blockchain technology is though? It’s trying to be an efficient distributed and decentralized database. If postgres released a plugin that let it be distributed and decentralized would you call that a bunch of BS?
Why is blockchain tech more BS than any other technology? Are you claiming that it doesn’t work, that it can’t scale, that there is any fundamental flaw with the design? Because otherwise I don’t see how you can call a database BS.
Everyone here seems to be railing against blockchain technology because they disagree with some particular use of it, not because there is anything inherently wrong with the technology itself. We all want to make distributed computing easier, and blockchain technology aims to do that, but everyone is saying it’s “so complicated”; that’s because distributed computing is complicated and we, the programming world, are still trying to find good solutions to its problems. I personally think blockchain tech is one of the more promising approaches to solving them.
“ It’s trying to be an efficient distributed and decentralized database. If postgres released a plugin that let it be distributed and decentralized would you call that a bunch of BS?”
That is quite a strawman. Blockchain is not a distributed Postgres. Here are some key features of popular blockchain tech that wouldn’t happen in my model built on actual databases:
Energy consumption of miners to create the money. My model either uses existing currencies or commodities or instruments priced against them. The creation aspect takes either nothing or calculations one computer could handle. Also, the current model increases odds that a given currency will become a pyramid scheme to shift most of the wealth to its creators. The mining model increases odds an oligopoly will form as difficulty goes up. Both happened in Bitcoin’s design.
Commit costs are much higher than in traditional, strongly-consistent databases. My MasterCard can do a transaction in one second. The bank might delay it further for some analysis. The network itself handles 30,000 a second, though. Blockchains, by design, don’t come anywhere close so far.
Longevity. Distributed, OSS databases + nonprofits doing at least breakeven + currencies or commodities people already want is much more likely to last over time than startup model around blockchains.
Trustworthiness. There are several currencies that are very stable, well-managed, and stored in banks with good security. Leveraging that gives quite a head start on secure, stable banking. The blockchain currencies haven’t been stable or secure.
These are four examples where traditional tech and legal instruments are advantageous over blockchains. The mining cost, performance disadvantages, extra pyramid schemes, oligopoly pressures, and insecure exchanges all make me call BS on blockchains being the “solution” to the problems with ordinary currency and banking. So far, it’s added more problems than it solves.
There is nothing inherent to blockchain that requires “mining”. You need transaction confirmation, which is part of any consensus algorithm; it doesn’t matter if you use Ethereum or Raft or anything else, if you want consensus you need some inter-node communication.
There’s nothing inherent in blockchain design that can’t do 30k TPS. I grant that that speed isn’t there in most implementations yet, but there are definitely people working on it, and condemning a technology that’s a few years old because it hasn’t had performance tuning at MasterCard’s scale is quite premature. Here’s an example of a blockchain implementation that can do 8k TPS: http://kadena.io/
What? There’s nothing in the blockchain technology that’s owned by anyone?
I’m not even talking about currencies at all.
Your beef is with Bitcoin, not with anything to do with blockchain tech. Git is basically built on blockchain tech, are you saying Git is BS too?
Replace “blockchain” in all your posts with “bitcoin” and I can agree with you.
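For what it’s worth, the Git comparison does hold in one structural sense: both Git history and a blockchain are hash chains, where each record commits to its predecessor’s hash. Here’s a toy Ruby sketch of that shared idea (not Git’s or any real chain’s actual format, just the concept):

```ruby
require "digest"

# Toy hash chain: each entry commits to the previous entry's hash,
# the structure Git commits and blockchain blocks have in common.
# Tampering with any earlier entry invalidates every hash after it.
def append(chain, data)
  parent = chain.empty? ? "0" * 64 : chain.last[:hash]
  chain + [{ parent: parent, data: data,
             hash: Digest::SHA256.hexdigest(parent + data) }]
end

def valid?(chain)
  chain.each_cons(2).all? { |a, b| b[:parent] == a[:hash] } &&
    chain.all? { |e| e[:hash] == Digest::SHA256.hexdigest(e[:parent] + e[:data]) }
end

chain = append(append([], "genesis"), "second entry")
puts valid?(chain)            # => true
chain.first[:data] = "tampered"
puts valid?(chain)            # => false
```

What Bitcoin layers on top of this plain structure is the mining/consensus machinery, which is where most of the objections above are actually aimed.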
I believe it. Most I’ve seen involve weird schemes for covering costs that could lead to inefficiencies, attacks, and so on. My model is simple: people pay for an account and/or companies providing service get fees for what they do. Proven model. Keeps the tech protocols simple, too. What’s the simplest, payment-oriented blockchain you know of without stuff like mining? Or even a list of them.
Visa isn’t hitting 30,000 a second due to massive optimizations: they’re using 1970s technology, throwing money at CPUs and memory. Modern ones can do 1+ million a second on under $100k of servers. Spanner does a crapload with strong-ish consistency on geographically separated servers, but with a 30s pause if you want an ordering guarantee when availability takes a hit. Blockchains needing lots of optimizations to reach 1970s mainframe performance on 2017 servers is a strike against them.
I said longevity, not ownership. Lots of stuff coming and going, with other stuff volatile in price. This is a problem specific to startups in general, but it happens even more for those using their own currency.
My bad then. I’ll just stay on blockchain points.
Re Git. There are piles of threads about its problems on the Internet. It’s useful and even necessary to participate in a lot of FOSS. I certainly would argue it could be more efficient, available, and secure in its design. It was a good Worse is Better example in how it got popular.
Re Bitcoin. Yeah, it’s the worst of them I’ve seen. Glad we agree on at least that.
This is all well and good for folks privileged enough to live in countries with benevolent, effective governments, but for those living under unjust regimes, distributed digital currency is a game changer.
Currency unions seem to be a bad idea for the poorer countries. For example, Greece is unable to devalue its currency because it uses the Euro.
These currencies are also a game-changer for criminals who have been driven near to extinction by benevolent, effective governments.
“Think of the criminals/terrorists/drug dealers/etc.”
These arguments aren’t effective anymore. People are inoculated against them after they were repeated ad nauseam during the “war on drugs” and “global war on terror”. The arguments are just as hollow now as they were in the 70s.
I’m also curious what makes you think that criminals were “driven near to extinction” and, beyond that, are suddenly resurging thanks to new currencies. Trends in criminology data support neither of those claims. Crime rates have been decreasing gradually, and they are still doing so.
I’m actually more concerned by money launderers and corrupt government officials. The latter is indeed almost gone in the west, when was the last time you heard of someone having to slip the clerk a twenty under the table to get a driver’s license?
The latter is indeed almost gone in the west
Money seems to control our political system at the highest levels, even if DMV clerks rarely receive bribes.
I’m not inherently concerned by money launderers; “money laundering” (attempting to anonymize ownership) does not, in and of itself, hurt anyone. It’s a victimless crime. The only reason it’s illegal is that it’s easier to prosecute than whatever the target might actually be doing with the laundered money.
However, if you are concerned with money laundering, you shouldn’t be afraid of Bitcoin; multi-billion-dollar crime syndicates just use HSBC.
There are no criminal people, only acts which may be labeled as criminal during some time period.
I will note that most of my personal, hard earned, ethically earned, legally earned capital was eaten by an effective government’s currency controls.
The criminals in charge of that effective government had “legal” means not accessible to the person in the street to freely move currency around.
Strangely enough, currency, no matter how rotten the origin, is gladly welcome in all western countries.
Equally strangely, there tends to be some pretty amazing hurdles to cross for currency to leave…
I must admit my belief in the benevolence of some governments has been tainted.
Their bend-over-backwards willingness to accept and turn a blind eye to tainted assets coming in is matched only by their reluctance to let them leave.
Those countries tend to pass laws or otherwise take action to disrupt anything affecting their control. In a lot of them, US dollars or commodities like gold are very valuable vs their own currency. My scheme is a digital version of those, whose banks will be in countries like Switzerland, with paper or mobile methods of using the service.
That’s actually less risky than currency or banking in a poorly-governed area.
I’ve just sent him the link. Let’s hope he decides to do a review of this magnificent example of overengineering.
Same here! I can’t help but feel AvE would do a much better teardown, not only looking at what’s there right now, but considering what’s blatantly missing from this article; what can be improved in the future! If those are machined gears then there’s no reason they can’t cut cost by sintering. The article says it’s so expensive to apply “thousands of pounds of pressure” completely ignoring that any home-gamer clamp can do the same.
It’s still an over-engineered and expensive clamp, but I feel like the article is dishonest about where it is and where it can go.
A lot of these “unnecessarily machined” parts look to me like they were designed for casting but the tooling wasn’t ready for the scheduled first run (typical). They would be a lot cheaper in mass production.
The gears are OEM, they are not that expensive when you buy them by the thousand. You’d need to sell a lot of juicers to break even on sintering tooling for them (and machined gears are stronger), doesn’t make sense for a commodity part.
Overall I refuse to believe that the engineers who had enough skill and experience to design this and get it working are so blissfully unaware of production process costs. Simply doesn’t happen IRL.
Number 1 and Number 11 seem to be contradicting each other, but I suppose you could interpret #1 as saying “Don’t just say ‘I’ll fix it later’, but write it down.”
Whenever I encounter a situation like this, or a feature improvement that would be good in this spot, I always write a TODO, which is what #1 is saying. You need to keep track of these things or they’ll get lost.
Also, when on teams, I try to promote the format # TODO(sfz-): thing to do 2017-04-25 so that if someone has time to pick any up, they know who to contact for details or whether the issue might be stale or irrelevant.
The problem with that is people sometimes leave the organization. A slightly better version is to include an ID for your issue-tracking system that has more details.
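To make such annotations trackable in practice, a small script can surface them; here’s a minimal Ruby sketch (the comment format and every name in it are assumptions for illustration, not any standard tool):

```ruby
require "date"

# Scan source text for comments of the form:
#   # TODO(owner): description YYYY-MM-DD
# and report any older than a cutoff date. A toy sketch, not a real tool.
TODO_RE = /#\s*TODO\(([^)]+)\):\s*(.+?)\s*(\d{4}-\d{2}-\d{2})\s*$/

def stale_todos(source, older_than:)
  source.each_line.map do |line|
    next unless (m = line.match(TODO_RE))
    owner, note, date = m.captures
    parsed = Date.parse(date)
    { owner: owner, note: note, date: parsed } if parsed < older_than
  end.compact
end

src = <<~RUBY
  def charge(user)
    # TODO(sfz-): handle declined cards 2017-04-25
    # TODO(alice): add retry logic 2024-01-10
  end
RUBY

stale_todos(src, older_than: Date.new(2020, 1, 1)).each do |t|
  puts "stale TODO from #{t[:owner]}: #{t[:note]}"
end
# prints: stale TODO from sfz-: handle declined cards
```

Pairing this with an issue-tracker ID in the comment, as suggested above, makes the report survive people leaving.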
My main hobbies outside of programming (which I still do at least a day or two a week outside of work) is powerlifting (squat, deadlift, bench), board games, video games and hiking. Although I also spend a ton of time on Youtube just watching videos about topics I’m interested in at the moment.
I really, REALLY hate this article. With a blind seething passion.
Authors that entitle articles “Ruby XXX” and then spend the ENTIRE article talking about Ruby on Rails should be … I dunno, given 1000 lashes with a wet noodle or something.
Really. Why oh WHY do people equate one to the other?
It would be like saying “C is terrible because UNIX sucks!”. What does one have to do with the other beyond the fact that it was a tool that was used.
Most of the errors in Rails are
undefined method XYZ for nil:NilClass, this is a whole class of bugs that compilers have been able to detect for years and years.
How does this not have anything to do with Ruby? It’s a Ruby error message.
The article mentions “Ruby” in its title, and as a reference to the Ruby is still great, that’s it. The rest is just a rant about how the author has never seen a clean Rails application…
This reminds me of a recent article on Clean Code about managing errors, and nulls (or nils in Ruby’s case):
Now, ask yourself why these defects happen too often. If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.
And what is it that programmers are supposed to do to prevent defects? I’ll give you one guess. Here are some hints. It’s a verb. It starts with a “T”. Yeah. You got it. TEST!
(please check the original article for context.)
Why should the programmer be responsible for preventing an error that the computer could easily prevent for you?
I’ll reply with a question too: when the type-safe program crashed, was it a computer error or a programmer error? :)
Depends on the nature of the crash. If it’s something the type system was supposed to prevent, it’s either a language design or a language implementation error (or both).
Think of it as the null pointer exception of ruby.
I very very rarely see it.
Why? Because I haven’t swallowed DHH’s (author of rails) bullshit about TDD being bad. If you listen to the others in the Ruby community and use TDD, you very seldom see that in a running app.
I don’t think that’s a class of bug that most compilers detect. Somehow people still get NullPointerExceptions in Java.
Try getting a null pointer exception in Haskell or Rust, it’s really only possible if you tell the compiler to let you do it.
I edited my comment to say “most compilers” because I realized that was the case. Overwhelmingly web applications to-date have been written in languages where that is not the case. I -think- the author is clumsily making a case against Ruby’s type system, but it’s poorly-expressed here.
I get that many consider a language that can eliminate an entire class of errors to be advantageous/superior. It takes some really generous interpretation to get from the author’s gripes to that argument. If that’s the point here, it’s not cogent.
He’s essentially saying “I see segfaults all the time when I use C. Why doesn’t Unix use garbage collection?”
Yes, this is a Ruby error message. However, the author is ranting about rails, and in fact not actually expressing anything interesting about the Ruby language itself other than “It’s not strongly/statically typed”. Duh! Want static typing? Go use another language :)
The reality for me over the past 10 years has been that in any Rails application, 80% of time is spent untangling messes to figure out how to make a change, or fixing bugs. Only 20% of time is spent on actually adding value. That ratio should be flipped around, and there are other languages and frameworks out there that try to do just that.
The poster mostly describes how they have maintained only bad Rails codebases. Nothing about Ruby.
Well, it’s not Rails’ idea to have nil in the language; nil is a Ruby choice, and the author does state that that’s where most problems come from.
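For readers outside Ruby, the class of bug in question looks like this: nil flows silently through the program until something finally calls a method on it, often far from where it originated (a minimal, hypothetical illustration):

```ruby
users = { "alice" => { email: "alice@example.com" } }

user = users["bob"]    # lookup misses: silently returns nil, no error yet

begin
  user[:email].upcase  # the blowup happens here, far from the bad lookup
rescue NoMethodError => e
  puts e.class         # NoMethodError ("undefined method ... for nil")
end

# The defensive idioms Ruby offers are opt-in; nothing enforces their use:
puts user&.dig(:email).inspect               # safe navigation  => nil
puts users.fetch("bob", {})[:email].inspect  # fetch with default => nil
```

Static type systems catch the bad lookup at compile time; in Ruby the discipline has to come from tests or conventions like these.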
That’s clearly not the primary issue the author is trying to highlight about Ruby, as every major language used right now has some concept of nil.
[Comment removed by author]
[Comment removed by author]
It pretends to have an option type and you can use potentially nil references in that way if you go out of your way to do so, but there is nothing stopping certain code from just dereferencing nil. It’s a good step and done pretty well, but there still is nil.
Even still, the original author was not at all even a little bit claiming that nil values are the key problem in Ruby that makes it worse than other mainstream programming languages.
There are no deadlines,
But…
you’re held accountable by your commitment to your team and forward progress.
So commitment != deadline? I’m curious what “commitment” means for them.
Also
there are no managers
For a “small” team (doesn’t say how small), that may work. For a while. Every time I see something like that, though, I just think of GitHub and other manager-less environments that turned toxic.
no HR department
Combined with no managers?
You’re free to contribute wherever it’s effective
I’m curious who determines what “effective” is?
This is the kind of thing that makes me wonder if there’s a lack of “adult supervision” that will be paid for later. I know this can work for “small” teams, but I haven’t seen the manager-less + HR-less work in the long run.
So commitment != deadline? I’m curious what “commitment” means for them.
This means if you commit to implement something, then you take it to completion, and if you can’t for some reason then you ask your team for help.
For a “small” team (doesn’t say how small), that may work. For a while. Every time I see something like that, though, I just think of GitHub and other manager-less environments that turned toxic.
No doubt things may change if we continue growing larger as a team. Even at the 12 we are now, there are constant challenges to ensure everyone on the team is happy and effective. In many ways I am the manager, but we have other mechanisms as well to identify issues before they become severe. One of them is quarterly peer reviews where we each score ourselves plus everyone else on the team. This is done in an anonymous fashion during the review process; however, we openly discuss the aggregate results. We also have regular one-on-ones between team members and myself, which could further reinforce the notion that I am the manager.
I’m curious who determines what “effective” is?
Per above, the team as a whole through peer reviews, with me stepping in for one-on-ones if necessary.
This is the kind of thing that makes me wonder if there’s a lack of “adult supervision” that will be paid for later. I know this can work for “small” teams, but I haven’t seen the manager-less + HR-less work in the long run
We make it work now. I have no idea if it’ll stay this way forever, but it works well for us right now.
I think the team is small. Check the other post : https://blog.dnsimple.com/2015/09/retreat-avignon-august-2015/
While we are 11 men and 1 woman at this point, we are far from homogeneous. We vary in age, religion, nationality and beliefs. Sex is not the only thing that makes us different.
I know this can work for “small” teams, but I haven’t seen the manager-less + HR-less work in the long run.
Not sure why they should be pressured into building something “for the long run”, if something makes a group of people happy and they can pay their bills, why isn’t that enough?
Well, we do have our customers to attend to as well, and they care that we are stable and will be around to take care of them, so that’s what we always aim to do. :-)
For a “small” team (doesn’t say how small), that may work. For a while. Every time I see something like that, though, I just think of GitHub and other manager-less environments that turned toxic.
Every time I see someone bring up team size & Github’s more recent disarray, I have to consciously stop myself from throwing a fit and screaming “correlation != causation”!
1) Re: Github, the simplified version of the situation was that this was due to VC-initiated VP/Director-level management reshuffling. VC’s wanted “adult supervision” in charge to lead Github to an “exit”.
2) Re: “it must only work for small teams” (and associated mentality), see: https://en.wikipedia.org/wiki/W._L._Gore_and_Associates#Culture (I specifically avoided using Valve as an example, just to show that even in industries other than software, this is possible).
I think the takeaway here is that more than size, it’s the people that makes up the org. Have the wrong (or rather, a “different”) set of people setting the culture/tone, then a previously loosely allocated organization can quickly fall in line to resemble a more traditional centralized org.
For a “small” team (doesn’t say how small), that may work. For a while. Every time I see something like that, though, I just think of GitHub and other manager-less environments that turned toxic.
That seems like an odd response to have. Do you think that manager-full environments are never or rarely toxic? That has not been my experience.
When I’m talking to people who are evaluating Haskell and trying to determine its risks, these things never come up. It’s always the same thing which can essentially be summed up as “library support”
The flaws Mitchell is addressing here are real flaws that affect people who are already pretty deep in Haskell-land. All of the above still have unsatisfactory answers IMO, even though it certainly can be done.
Can I do MVC web dev?
Yes. Most of the Yesod apps I work on have models, controllers, and views. I used Clojure and Python (Django) before Haskell, so I know what people expect here and it’s fine.
How do I keep the code in sync with the database (migrations)?
http://hackage.haskell.org/package/persistent ( I streamed some of this and web dev last night: https://www.youtube.com/watch?v=uYXX1t3GrsE )
How do I do performance (response time) monitoring?
http://hackage.haskell.org/package/datadog
http://hackage.haskell.org/package/statsd-datadog
Can I still use papertrail (or similar logging aggregation/search service)?
http://hackage.haskell.org/package/katip-elasticsearch + treasure or whatever you want really.
What’s the equivalent to getsentry.com for haskell?
Sentry. http://hackage.haskell.org/package/raven-haskell It might need some fixing up, but I’ve used this in prod and it was fine.
How do I do continuous deployments from <my favorite CI service>?
rsync a binary from your CI build, run Ansible, bounce the upstart daemon. Our deployment stuff is in Ansible.
Also see: http://haskelliseasy.com for other questions like this.
In my opinion, Haskell is a language completely suitable for production if you want to use traditional paradigms.
But Haskell has a lot more to offer.
There are many paradigm shifts being explored in Haskell.
For example, you can take a look at transient (see this recent thread for example):
https://www.reddit.com/r/elm/comments/4wq3ko/playing_with_websockets_in_haskell_and_elm/d69o11p
You might dislike the syntax, and believe that the example is quite minimalist. But it shows that while MVC has its virtues, it is not the alpha and omega of web development, and there can be other ways of thinking about a web application than the ones we are used to.
I agree with DHH’s statement that programmer happiness is all that matters. However, I haven’t met a Rails programmer who was happy writing Rails in probably years. Everyone writing Rails constantly complains about how nothing works as expected, all apps become unmaintainable nightmares, etc. etc.
If programmer happiness matters to the Rails team, what’s being done to increase it?
I’ve said it before, but I wish the Rails team would invest more resources into maintaining an LTS branch. It’s easier to create happiness when a developer can just learn the tool once and get back to work without having to constantly learn the new version, spend time migrating code to it, fix broken tests, deal with incompatible plugins, etc. Unfortunately I think a lot of newer developers equate a stable project with a dead one, and they constantly want new shiny things.
I’m a happy Rails developer because most of the projects I work on are stuck on old versions like 2.3 and 4. I never have to think about upgrading Rails unless it’s for the occasional security update, so I can just focus on writing my application. If I thought about what it would take to pull all of these things up to whatever the current version is, it would make me unhappy.
Things like PHP succeeded because the language/interpreter moved very slowly (although this was bad for security, because certain things needed to change quicker) so developers could write code that still worked years down the road. This is kind of why I am apprehensive about learning Swift for iOS projects once it was open sourced - I think there are too many cooks in the kitchen trying to form the language around what they want, and it’s going to be a moving target for a long time.
IMO, it all depends on scale. Once the experiential barrier is overcome, Ruby provides short-term programmer happiness with its object-oriented/procedural orientation and eclectic grammar (deriving from Perl and Smalltalk), while Haskell provides long-term programmer happiness with its extremely versatile type system and pure functional-ness. Each has a clean syntax, but the underlying paradigms make for radically different programmer experiences. Ruby is great for churning out code that’s fun and natural to write without jumping through functional hoops, while for more sophisticated tasks it may perl in comparison with Haskell’s strong maintainability and reasonableness. That’s just one example, but you might see how what makes me happy may not make you happy, and vice versa.
Nevertheless, I wouldn’t call this a failure of Rails to make programmers happy per se; maintainability is a whole other problem that operates on an entirely different scale.
A big part of this is because frameworks try to cram all of programming into a tiny color-by-number box and call it good. Eventually what you’re doing doesn’t fit so well, but you’re already stuck there, so you just hack atop it.
Rails has enabled lots of projects to get bigger than they would have without it, but it doesn’t fix the fact that you need to know architecture eventually.
I’ve worked professionally for the past 5+ years on 7 or 8 projects that used Rails in various capacities. For me personally, Rails is awesome when you aren’t stuck using all of it. If you’re using the routing/controllers, sticking your business logic in a service layer and leaving everything else out, it’s a joy. While there is some small level of efficiency you gain from all the automagical stuff, my opinion these days is that the opacity hurts more than it helps.
Not to mention the… trendiness? Fad-culture? I’m not sure what the right way to label it is, but if you’re stuck trying to decide whether it’s fashionable this year to use helpers vs decorators (or was it presenters?), and wondering whether _changed? methods are deprecated this month, Rails can be quite a nuisance. Did “rake db:test:prepare” get undeprecated? Oh, lovely, I suppose we can use that again. Need to build an administrative backend for your own people? I guess it’s cool now to use ActiveAdmin, a deus ex machina gem that somehow attempts to what, simplify writing CRUD apps? I thought that’s what Rails was for.
</rant>
On a somewhat more serious note, I have worked on a very large Rails project where the performance of the application was perhaps the single biggest problem we had - largely ActiveRecord generating complex and inefficient queries which ended up being tediously re-written by hand. This wasn’t a Twitter-scale application, either. The Ruby language wasn’t at fault, but pieces of Rails certainly were.
While it may or may not be possible to remedy the programmer happiness problem within Ruby/Rails, I think we’re missing something if we only look within that community. Jose Valim created a language and community that is making people really happy. Elixir was heavily influenced by Ruby, and (at the moment) it nails the programmer happiness point - so to say “Ruby/Rails makes people sad” is a little bleaker than reality; it certainly fostered some good ideas that made their way into other microcosms. I’m sure there are other examples, but the point is that all software ends up becoming unwieldy and unpleasant, while the nice aspects generally make their way into other projects.
Rails dev here at Shopify. Still loving it!
We’ve still never had a complete rewrite but have been really good about keeping tabs on tech debt and making sure we’re running as close to edge as possible.
What’s the likelihood that IBM actually supplied some way to generate true random numbers here? That would at least justify some of the time spent.
Hi, I’m the author of contracts.ruby. This is a well-written post! There’s also a great gem from Simon George that will auto-generate documentation from contracts: https://github.com/sfcgeorge/yard-contracts
If I understand it correctly, the contracts will still create exceptions at runtime right? Is there any tooling for trying to catch some of the errors statically?
Correct. There are various incomplete tools for static type-checking. Here are two:
https://github.com/michaeledgar/laser http://www.cs.umd.edu/projects/PL/druby/
They are both partial checkers, because it is impossible to typecheck Ruby statically. I have been thinking about writing something that uses contracts to do partial type-checking as well.
There’s also a language with Ruby syntax that has static typing: http://crystal-lang.org/
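To illustrate the runtime (as opposed to Crystal-style static) nature of such checks, here is a hand-rolled toy contract in plain Ruby; this is just the essence of the idea, not the contracts.ruby API:

```ruby
# A toy runtime contract: wrap a function so argument and return types
# are checked on every call, raising TypeError on mismatch. This is the
# essence of what contract libraries do; nothing is verified before
# the code actually runs.
def with_contract(arg_class, ret_class)
  lambda do |fn|
    lambda do |arg|
      raise TypeError, "expected #{arg_class}, got #{arg.class}" unless arg.is_a?(arg_class)
      result = fn.call(arg)
      raise TypeError, "expected #{ret_class}, got #{result.class}" unless result.is_a?(ret_class)
      result
    end
  end
end

double = with_contract(Integer, Integer).call(->(x) { x * 2 })
puts double.call(21)    # => 42
# double.call("21")     # raises TypeError at runtime, not before
```

A static checker would reject the bad call before the program ever ran; the contract only catches it when that code path executes, which is why test coverage still matters.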
Haskell’s great. It isn’t hype. If anything, there’s “an embarrassment of riches” when it comes to the library ecosystem. I’m still figuring out the web frameworks (Yesod, Snap, Scotty) and which to invest in.
I know of about 20 companies that have gone the Haskell route and none have regretted it. Of them, only one ended up moving away from Haskell and it was toward F# because they wanted to be on the .NET platform (it was a financial firm whose traders used Excel pretty heavily). It wasn’t about regret, just specific convenience.
I’m investigating Haskell for a mid-size Chicago company and I’m finding the language to be a joy to work with. It has all the things I liked about Clojure, but a type system that gives me confidence in ramping up to a larger team, which I wouldn’t have as much of with a dynamic language.
Though the types do pay off long before an additional human enters the equation.
I don't like salesmanship for PLs. I'd rather languages stand on their own merits. If your ROI curve is beyond the horizon, then it's hard to know. I've chosen to focus on making Haskell sufficiently straightforward to learn that people can kick the tires and see what we mean. That's likely to be more convincing than burning people's credulity before they've even fired up GHCi with what, they can only assume, are outlandish claims.
I’ll probably get downvoted for this, but – it’s not because Haskell is that good. It’s because everything else is that backwards. The road doesn’t end with Haskell. PL researchers haven’t been asleep at the wheel even if industry has. More stuff in the pipe to keep us busy for the next few decades.
But as a 9-5 programmer? It’s great. I use Haskell at work. I’m extremely happy that I do. And you know what? There are still messes. They aren’t Haskell’s fault, but Haskell means that for my small team they’re still tractable and it eliminates a lot of additional risk & pain. I’ve flipped multiple companies over to Haskell (contracting and amateur) and they’re all happy customers too. I didn’t flip those companies by pitching the fuck out of Haskell, I did it by having acquired enough credibility with them that they were prepared to investigate and evaluate Haskell themselves. Only in one case did I teach them anything and that poor CTO got the worst Haskell tutorials I’ve ever given. The tutorials were bad because I hadn’t taught many people Haskell to any real depth. Two years later, that has since changed. Despite my terrible tutorials, the CTO kept with it because Haskell was doing the sales, not me.
I propose we skip the sales pitches that turn reasonable-minded people off (isn’t that who we want in our community?) and instead consider this way of putting it:
“It’s not perfect, but we think if you check this cool thing out and plumb its depths properly, you’ll have discovered something really cool that you’ll want to use in your work or personal projects. I’m here if you want to ask questions or get details.”
I think that’s going to get things off on the right foot more often than over-eager marketing pitches.
An anecdote: I work in a coworking space. I heard a marketing dude make an outlandish claim about AI. Something to the effect of, “My company is the only provider of True AI™ in the market today”.
Now, given what I'd just heard, I could arrive at one of four conclusions:
1. He's incompetent and doesn't know anything about AI. His claim is overreaching, but the engineering might be sound.
2. The engineering isn't sound and he's lying to cover for this.
3. The engineering is sound but not spectacular, and he's lying because he's a marketer/salesperson. This isn't really better than the other possibilities, because the main thing I need to know about a product or tool is its limitations. Understanding limitations requires more depth of experience than listing benefits does, so it's best if you can trust your vendor. Cf. databases.
4. They've actually invented AI, but their team is so utterly disconnected from the research community that they haven't published, and so incompetent at business that they're pitching their thing at a coworking space in Austin, TX.
Maybe there are more gracious conclusions I could’ve arrived at, but the real take-away?
I didn’t bother to ask him to elaborate. I got tea and left.
How many people are getting their tea and leaving because a tool that had a lot of value to offer was oversold?
Apologies for the wall of text. Back to writing about Monoids.
Yes, very much, to all of this. Especially to the analogy.
I wrote something here a while ago, complaining about this from the other direction: Why does everyone complain about monads being difficult to understand, despite the general consensus among experienced Haskellers that monads aren’t especially important for newcomers to learn anytime soon?
Because the hype that got people interested in Haskell in the first place mentioned monads prominently.
I would appreciate if people would stop doing that. :)
Because the hype that got people interested in Haskell in the first place mentioned monads prominently.
Yes! This stuff agonizes me. Getting people to stop fetishizing monads (as if they could be learnt without knowing the language) is a regular chore of mine.
Yes, you’ll learn them later. No, it doesn’t matter right now. You can cargo-cult the do-syntax and figure it out later. Monad analogies do more damage than temporarily ignoring what do syntax does under the hood. Trying to dive straight into Monad when you don’t know anything about Haskell has burnt so many people out unnecessarily. It’s a terrible unforced error to make in teaching people something.
The very idea of a single-article monad tutorial with no prerequisites isn't well-founded, and such tutorials should be avoided if at all possible.
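To illustrate the cargo-culting point with a toy sketch (mine, not from any particular tutorial): the do block below can be used entirely by rote, and the desugared version underneath, which is where Monad actually shows up, can wait.

```haskell
-- Chaining computations that can fail, written by rote with do-syntax.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

quarter :: Int -> Maybe Int
quarter n = do
  h <- halve n
  halve h

-- What the do block desugars to; (>>=) is the Monad method,
-- but a learner can ignore that for a long time.
quarter' :: Int -> Maybe Int
quarter' n = halve n >>= halve
-- quarter 8 == Just 2, quarter 6 == Nothing
```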
I would appreciate if people would stop doing that. :)
Me too!
Funny story, I know a mathematician that uses Haskell and she’s never once used Monads (unless you count GHCi’s implicit IO + do loop) because it’s never been relevant for her work :)
In my tutorials I always explicitly tell people not to care about Monads; it's just a mathematical concept and set of rules that isn't really relevant to your day-to-day programming. I think because Haskell people tend to be the type of people who want to understand things and learn things deeply, we tend to teach that way as well.
However, a Node.js programmer doesn't care how the Event Loop works even if they use it every day. A JS developer wanting to learn promises doesn't go and learn how they work and what their computer-science fundamentals are; they just look at the tutorial on how to use them.
So when I teach Haskell I teach people how to use Maybe, how to manage state in their application and how to do IO. You never need to know the underlying principles, and when people are comfortable with using things, they typically 1) learn the underlying principles much easier and 2) do so from their own interest instead of being told that this is something they should learn.
You never need to know the underlying principles, and when people are comfortable with using things, they typically 1) learn the underlying principles much easier and 2) do so from their own interest instead of being told that this is something they should learn.
Yeah, this is something I’ve noticed with some of the more mathematically minded folk that haven’t taught much. They think learners can work backwards from formal definitions to application…which is bonkers. If you have a means of kicking around Haskell (with help), then you can probably ignore Monad, but we focus on simply trying to do a good job of teaching the concepts in our book.
Rather just slay the dragon, ya know?
I completely agree with all of what you are saying, and this post was actually my attempt to highlight how, despite our built up technical debt and focus on things not-haskell, Haskell worked out for us. I was hoping it would be more of an anecdotal story than a pitch, perhaps I failed in my mission and it sounds like a pitch? Would love feedback on how to improve the story-telling so it isn’t a pitch.
That said, this:
“It’s not perfect, but we think if you check this cool thing out and plumb its depths properly, you’ll have discovered something really cool that you’ll want to use in your work or personal projects. I’m here if you want to ask questions or get details.”
is unfortunately sometimes not enough, because the people you are trying to convince (sometimes) are not programmers. They care about things like what monetary benefits it brings, how easy it is to hire, and whether it will scare off potential acquirers. How does one convey that without making it sound like an outlandish pitch? I can provide examples for all of those things, but the response is always the same: "so why isn't everyone using this if what you're saying is true?"
I was hoping it would be more of an anecdotal story than a pitch, perhaps I failed in my mission and it sounds like a pitch?
No no, you’re fine! My comment is not a reply to your post.
is unfortunately sometimes not enough because the people you are trying to convince (sometimes) are not programmers.
Yeah, I was talking about technical people, who have a tendency to be a bit skeptical. Management, IME, has been either very receptive or very unreceptive to Haskell. This often depends on whether they have a growth mindset.
How does one convey that without making it sound like an outlandish pitch? I can provide examples for all of those things but the response is always the same “so why isn’t everyone using this if what you’re saying is true?”
There's a (game-theoretic?) answer to this: if it were easy/cost-free, everybody would be using it. Pretty much by definition, if there's an advantage/benefit to something and it costs nothing, then it would be widespread. You could talk about it in terms of a technical "moat", similar to the PG essay. Or in terms of Moneyball, but that sort of pitch will make programmers grumpy if they hear you make it. I took a (temporary) salary cut to be able to use and teach Haskell in my work, so I'm sanguine about it :)
My comment was more of a “simma-down-nah” to over-eager programmers who don’t have that much depth in Haskell sprinting around news aggregators to tell everybody they should be using Haskell. Again, not a reply to your post.
I’m very glad you shared your experiences, please keep doing so! Cheers :)
Ditto what bitemyapp said - your post was good! I was more bemoaning the state of things in general. :)
It sounds (from your other comment) like you need something that compiles to JS. There’s PureScript with FRP libraries. There’s also pure JS stuff like NuclearJS that comes close to FRP even if might not be it in strict terms.
If you decide on PureScript, then depending on the flavor of FRP you want, you might also like
yeah, this is what i’m tossing up between; things like PureScript and things like Elm.
it seems perhaps wiser to go with PureScript and join in with other libraries if i need it; with Elm I seem to be pretty married to the Elm environment completely.
I agree. While the overall sentiment is probably something most people will agree with, there is no attempt to delineate when to chase something new and when not to. Rewriting your entire product every time a new JS framework comes out is one extreme; the other extreme is trying to write everything you do in C. C is very well understood and very standard, so does that mean it's better than everything else?
The interesting thing to talk about is where to draw the line, but of course that’s more nuanced than you can fit in a single blog post and additionally requires knowledge of the problem domain.
Agreed - real issue, needs nuanced exploration rather than broad statements. It’s impossible to take the article seriously when it’s proposing a simplistic categorical rule about something so important and subtle.
A meaningful exploration would be about how to decide which parts of your own project should be new vs. old, with case studies on both sides of the line.
The article’s view about regular penetration testing is interesting. Understandably, from the perspective of a business, regular penetration testing is an expensive waste. In the best case, it simply costs money to run the tests or hire an external evaluator to run the tests. In the worst case, it uncovers security holes which require even more money to fix. But truly, this is better in the long run for both businesses and users.
Obviously, users benefit from the knowledge that sites on which they have stored or used their credit card information are regularly tested for security vulnerabilities, providing some assurance that vulnerabilities are not present.
Companies benefit by avoiding costly and damaging loss of user data due to a security breach, and (we always hope) the fines and penalties that come from the government in investigations afterward.
It is the responsibility of businesses working with this (and other) sensitive information to meet reasonable standards of security, and I believe that includes regular penetration testing.
As a final aside, I will say that recently some friends discovered a major security flaw in a system we use which they reported, and which has since been fixed. The flaw compromised private records of all users, including potentially providing access to credit card information. It likely existed in the system for years, and would have likely been found by any qualified outside penetration testing. There are real problems with the current status quo on this issue.
I actually agree and for most (definitely not all, but most) businesses that have more than 20k transactions it will be feasible to do yearly pen testing.
My problem with the requirement is more in the phrasing of it. It’s yearly and
after any significant infrastructure or application changes to the environment (such as an operating system upgrade, a sub-network added to the environment, or an added web server)
Unless you get a lawyer and actually go to court to define what this means, it's completely open to interpretation. It could mean that any major version of your code needs a new test, or that any change to your infrastructure needs a new test; patching your operating system (especially on Windows) usually happens monthly, and if you're in a growth phase then adding servers is common. For a startup where change is the norm, the worst-case reading means you have to do weekly pentesting, which is obviously absurd. I don't think the PCI council would require such an extreme, but it's the language of not only this requirement but a wide array of others that makes it practically impossible to comply fully.
Ironically, "square peg in a round hole" is the first thing that comes to mind. At some point I think I would consider reevaluating my software platform. What makes OS X an irreplaceable component here?
They mentioned they use the built-in graphics libraries in OS X. I imagine it's done for stability reasons and an interest in consistent hardware performance. They might have to bring in another company, or create a new role focused on building and configuring those systems, if they used custom boxes with a Linux flavor.
One nice bit about what OS X does is compiling the filter graph into one pass that's then executed on the GPU. The typical Linux image-processing tools people run (ImageMagick, etc.) don't do that. Halide does do it, but it's quite recent and a lot rougher around the edges (it's code from a PhD thesis, though much better than typical "research code").
The imaging pipeline is vastly superior to anything currently available on Linux. The real question is why not use Windows, which has if anything a technically superior story to OS X, and is certainly much more at home in a datacenter.
I’m really interested in why (or what in) OS X is better than Linux as well as why Windows would be better than OS X. Do you have any resources to read more about this?
Reminded me of http://macminicolo.net/ .
I just finished a major refactoring push of a Rails app on Monday and will be spending much of the rest of the week reading through documentation and API references of Stripe/Recurly/Chargify to see what difference they have from a technical POV. Will also spend some time writing an EPL2 parser in Rust.
I also understand that cabal sandboxes are a thing. I don’t care about that either.
Literally all of his issues can be fixed by using sandboxes. I don't understand his argument that while using sandboxes you can somehow accidentally mess everything up; that has never happened to me.
AFAIK, when you’re in a sandboxed environment Cabal will disregard your global installs when building? Maybe I’m wrong on this.
I had problems when sandboxes initially came out. I'd sometimes forget that I hadn't created one when working on a new project, do a cabal install, and then have conflicting global packages. I was really happy when the require-sandbox flag was added later on.
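For anyone hunting for it, that setting lives in cabal-install's user config (a config fragment; the comment syntax is cabal's own):

```
-- in ~/.cabal/config: make cabal refuse to install
-- outside a sandbox instead of polluting the user package-db
require-sandbox: True
```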
No, it doesn’t disregard the global installs. It can’t, because base is part of the global installs, as are the other libraries which are required to build GHC.
In an ideal world, one would only ever have these very minimal libraries installed globally. Even that is still a problem, because it means you incur the “why isn’t it using my newer version” problem if you ever try to upgrade one of them… The answer is generally because you transitively depend on the GHC API, which depends on exact versions of the core libraries, but knowing that doesn’t actually lead to a solution other than build your own GHC.
As a community, these small things are hurting and they need to be much, much higher priorities to fix.
Or move to Nix instead of Cabal. That’s honestly my first-choice outcome. :/
Or move to Nix instead of Cabal.
Cabal sandboxes have been fine for me, I’ve never once been able to get (with copious help from experienced Nix users) a simple Haskell project to build in Nix.
No, it doesn’t disregard the global installs. It can’t, because base is part of the global installs
OP probably meant user package-db. I should write a post explaining all this sometime - the relationship of global <> (user | sandbox) package-db isn’t very clearly explained.
“Works for me” is a very low bar. Agreed that Nix has a ways to go; that’s why it’s a pipe-dream.
I consider the user package-db to be global in the same way as the system package-db is. I understand the distinction, but neither is a suitable place to install anything, ever. :)
My problem with sandboxes is that, if I have five related packages, all written by me, which depend on each other in some fashion… and I want to make a change to one that’s high up the graph, then check its impact on the others… I wind up building a great many unrelated libraries once per package of mine.
Sharing a sandbox reduces this load, but at the cost of making it very easy to make mistakes as to which versions of one of my packages the others are being built against. It also still doesn’t solve the problem that there’s no real way to transitively unregister everything that was pulled in for the benefit of a later-in-the-chain package, for the purpose of getting a clean sandbox state to test the earlier-in-the-chain ones.
The last time I had this need was mid-2014, and nested sandboxes were being discussed as a possible solution. I’m not sure if they’re implemented yet. I think they’d solve most of my issues if they are, but would need to try.
“Works for me” is a very low bar
Less so, when I’m not even really speaking exclusively about myself, but also about the >100 people I’ve helped in IRC. The only time I’ve seen sandboxes not work is a recent (~48 hours ago) case with a Windows user.
Agreed that Nix has a ways to go; that’s why it’s a pipe-dream.
The problems are not purely with implementation. There’s a difference between what people need an OS package manager to do and what people need their project dependency management to do (Maven, Cabal, etc.)
Example: your project uses an older version of a library than what is in the nix-pkgs repo. What do you do? You manually create a Nix package out of the version you need. What if there were problems in the past that required extra (more than the packaging itself) work to make the Nix package work? You get to repeat all that because Nix packaging isn’t designed to solve the same problem as Cabal or ghc-pkg.
I consider the user package-db to be global in the same way as the system package-db is. I understand the distinction, but neither is a suitable place to install anything, ever. :)
I tend to agree, except in the case of the aforementioned Windows user. They had to use their user package-db because there's something weird going on that I need to put a repro together for. Basically, GHC couldn't "see" the packages in the sandbox path and we're not yet sure why. The global package-db should absolutely never be used. Platform still uses the global package-db for non-critical stuff, and it causes package conflicts for new users all the time.
It also still doesn’t solve the problem that there’s no real way to transitively unregister everything that was pulled in for the benefit of a later-in-the-chain package,
This is annoying, yes. One way to deal with it is to either change your project constraints to make things line up the way you want, or manually add the constraint, then install with said constraint (command line or Cabal file) and force reinstalls in the sandbox. That'll reinstall the specific dep and the dependencies it affects as needed. If you want to force specific versions without putting them in your Cabal file, you can use a Cabal config/freeze file to specify exact versions, then cabal install --force-reinstalls.
unregister everything that was pulled in for the benefit of a later-in-the-chain package, for the purpose of getting a clean sandbox state to test the earlier-in-the-chain ones.
Some of this could be addressed by having sandboxes shelved somewhere and symlinked to. Portable/movable sandboxes is one of the fixes currently being considered and would help here too.
nested sandboxes were being discussed as a possible solution. I’m not sure if they’re implemented yet.
That isn’t on deck at the moment, I don’t think the feature has any advocates and current maintainers are likely to shy away on account of apparent complexity. If you speak up in the Github project it’s possible things could change or at least an understanding of why it isn’t a good idea could be arrived at.
It would really help the tool maintainers a lot if they heard more from end-users, especially from people who are at least partly repelled by the current tool situation. It would help the people advocating for improvements if there was more evidence of malcontentedness with the current UX.
Your use-case was “build one package in a sandbox with its dependencies”. Indeed this works all right (finally). Serving the needs of beginners is important. Serving the needs of experienced users is also important. That does not adequately describe what I tend to need, and I think I explained how.
Re: Nix not having old versions. Yes. I don’t know in full detail what a Nix solution would look like, but I agree that having all versions rather than only the latest is a requirement for it to replace Hackage, and is not currently in place.
Yeah, it sounds like the Windows scenario is a special case of some sort; I hope you’re able to repro it.
Yes, I know how to do those workarounds, and I used all of them extensively. I’m happy to not have the need at present.
Yes, portable sandboxes will be great, but they're certainly several layers of architectural fix away. Right now, you can't even move Cabal-built .a files to a different directory in the general case, because they hardcode the paths they're meant to be installed at via a generated Paths_<pkgname>.hs module, and there's no way to know without reading its source whether a given library makes use of that information. Unfortunately, in certain situations there's no other easy way to achieve what this feature provides, so it's not clear what the fix should be. When that problem is solved, we can continue to peel the onion and fix the other blocking issues. Portable sandboxes would make me much happier but are not a short-term solution.
Honestly, I spoke up last year and I generally stay away from these conversations once I’ve said my piece, because the Haskell community loves to say “but you can’t do this concrete fix because of this abstract concern that only two people present understand”, and recurse on that a few times, and it’s intensely frustrating and I choose not to engage.
Serving the needs of experienced users is also important.
Sure, I’m an experienced user too. I’m also working with a larger sample-size than most experienced users WRT beginner and intermediate experiences because people complain to me or ask for help in IRC.
Portable sandboxes would make me much happier but are not a short-term solution.
Oh for sure, but at least being able to say, “download this and you’ll be able to build our stuff” would be pretty cool. It also means we’d have a plausible means of providing what Platform does without polluting the global package-db.
loves to say “but you can’t do this concrete fix because of this abstract concern that only two people present understand”, and recurse on that a few times, and it’s intensely frustrating and I choose not to engage.
Yes, I've been through this a couple of times, and it's partly why I'm trying to keep my eyes peeled for people willing to help me lobby for what needs fixing. Thanks for your past efforts; hopefully the welcome mat is out next time you spend some time with Haskell.
I’m certainly not trying to insult you or call you inexperienced; I’m aware that you’ve been around as long as I have, and I try to avoid such attacks when I spot myself making them, in any event. The examples you’re giving, of things that are achievable, do not meet my needs. I don’t think the strength of the claim that they are achievable is at issue. :)
Yeah… I hear that. Sympathies!
I’m certainly not trying to insult you or call you inexperienced
I know you aren’t, but I sometimes experience that thing Kindergarten teachers get where people talk to them like they’re kids because I spend a lot of time teaching new people.
do not meet my needs.
I would like to understand those needs better as they are outside what I’ve run into, but it’s not my place to ask for any more of your time. What you’ve offered so far has been appreciated.
I’m passing out upvotes all around. Have a good day :)
I thought this was a great talk! As a long-time Rails dev turned Haskell dev I’ve seen all the problems that come with large codebases in Rails and have had many of the same thoughts but not as well expressed as in this talk.
I really wish we could have support for gradual typing in Ruby like Flow/TypeScript for JS.
Or even just some decent static analysis tools that could enforce/analyze the patterns in mutable/immutable/nofx/fx.
This talk made me hopeful that maybe some time in the future it can be possible to write a large ruby codebase that doesn’t end the way they usually end up.
We might see some static typing features.
And, yeah, one reason I emphasized several times the idea of immutable objects not calling mutable ones, and nofx objects not calling fx ones, is that I can't see any way to enforce those rules. It's probably possible to write a Rubocop rule ensuring they don't refer to a mutable class by explicit name, or take an argument with the exact name of a mutable class, but that's pretty flaky. If Ruby didn't have freeze, something like it could be built that would work against anything but an explicit attempt to subvert it (which is the level at which Ruby enforces its "private" convention)… but I think it would be impossible to prevent an object from having side effects.
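For what it's worth, freeze already shows what that enforcement level looks like in practice: shallow, runtime-only, and subvertible. A quick sketch:

```ruby
# freeze is runtime-only and shallow: mutation raises when it
# happens, rather than being rejected by any static analysis.
point = { x: 1, y: 2 }.freeze

begin
  point[:x] = 99
rescue RuntimeError
  # FrozenError (a RuntimeError subclass on newer Rubies)
end

# Shallow: a frozen container's contents are still mutable.
names = ["alice", "bob"].freeze
names.first << "!" unless names.first.frozen?
```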
I’m glad it gave you some hope. My hope is to help the wonderful Ruby community be happier writing more reliable and maintainable software, and this talk is my first big step.
I think the author may have looked into the technology and found it to not add anything (which I argue is false, one of the biggest benefits is inter-corporate communication, letting companies have their own clients and “logic” for contracts/payments but still be able to interop in a standard way). This might ultimately be a matter of opinion because there are many ways to solve something and blockchain isn’t the only way.
But his last statement, that no one is trying to sneak this into orgs to solve problems, is definitely false. I know people working at NASDAQ on blockchain tech, J.P. Morgan is making massive investments in blockchain tech, there are things like the electric car charging network, and you have consulting companies like ConsenSys developing blockchain-based solutions for companies, and they're not short of customers.
If you actually followed the community in any detail you’d see thousands of people trying to sneak the tech into their companies. I have no idea whether they’ll succeed or not, but they sure are trying. I don’t know how the author can justify saying that this isn’t happening “in the slightest”.
But why? Why are they trying to make something simple so complicated again?
Handling a distributed network of semi-connected machines that need to sync up to a consistent state is hardly simple? I think you should try programming that yourself first before you say “why would you use an out-of-the-box solution for this?”
There are out-of-the-box solutions for that which outperform Bitcoin et al. at creating, updating, or deleting records. They can be cheaper to operate. They can use existing currency and banks. QED.
Absolutely, that’s why I would never use Bitcoin for it, but there are many other use-cases of blockchain technology than Bitcoin. Bitcoin happens to be one use, you can’t condemn a whole area of technology because you don’t like Bitcoin. That’s like saying you’re never going to use the Raft algorithm because Docker Swarm is using it.
Neil Postman has some interesting stuff to say about this, as does Evgeny Morozov in “To save everything click here”.
This is an antidote to the knee-jerk response of solving every problem, real or imaginary, with ever more technology, without considering whether it is appropriate, proportional, or actually solves the problem! I'm also guilty of this, because technology is cool, but I try to be aware of it.
So whether or not Bitcoin/blockchain is great awesome elegant technology is not relevant, as it doesn’t solve any real problems that can’t be solved in a much simpler and cheaper way.
Formal investments and large consultancies developing solutions is emphatically not sneaking it in. That's formal projects with real money to fund them, and consultancies knowing they can charge customers for understanding the technology perhaps slightly better than the customers do. Computing history is littered with majorly hyped projects, funded in such ways, which saw some traction inside this sort of customer. DCE (Distributed Computing Environment) is one such example: it saw traction inside financial institutions, but is hardly a mainstay of typical computing today.
The author is speaking about the tech which gets snuck into companies and embedded as critical before management finds out about it, because the people at the coalface just need to get things done. This is how a lot of early Linux adoption happened, how … all the author’s examples happened, I think, but some of them predate me. :)
So sneaking something in has to be non-public and not known outside the small sphere of people working on doing it. How does the author know this isn’t happening?
I know it's happening in several places because I know people working on these things; the author is directly contradicting my experience and has no evidence to back his claim up. I'm not saying I'm right (maybe I'm in a bubble so small it doesn't count), but at least I have anecdotal evidence; the author just blatantly claims that this isn't happening without even that much.
What are your peers doing with blockchain technology in their organizations?
They bought a ton of Microsoft, IBM, Cisco, overpriced consulting, and so on. I guess those are better than the competition at everything they do, too. Wait, they’re the thing holding businesses back now. The blockchain acquisitions might be the same thing later. Further, I’d look into how groups like Bank of America are patenting the hell out of any application of blockchains. It’s more likely the bankers see a new fad they can push hard to increase valuation a portion of which will be their own profit. Plus, they like controlling anything that’s a threat to them. It was a few of them, not governments or militaries, that destroyed Wikileaks when it was in peak, profitable form.
Or, it’s because banks have problems that are solved by distributed byzantine fault tolerant consensus, and blockchains happen to be one of the easier ways to implement this, and not some conspiracy theory.
There’s no conspiracy theory. Investing in stuff that might make a return is standard for banks. It might also solve a problem for them at same time. Distributed, signed, hash chains are a simpler, cheaper technology that’s existed for some time in secure auditing schemes. A portion of it done by digital notaries, too. That plus a consensus algorithm is all they need.
The combo would be faster, cheaper, and easier to assure than most blockchains. Additionally, retaining the settlement approach means they can keep transactions they're liable for internal instead of putting them all in a public database. The distributed database just needs to happen for exchanges and so on. They can even delete internal logs when no longer needed by them or by retention laws, saving vast amounts of CPU, storage, and energy versus blockchains like Bitcoin. Interledger even has a component for such a model with a formal specification done already, per one commenter elsewhere.
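For readers who haven't seen the technique, here's a toy sketch of a hash chain: no signatures and no distribution, just the tamper-evidence property the secure-auditing schemes build on.

```ruby
require "digest"

# Toy tamper-evident hash chain: each entry commits to the
# previous entry's hash. Real audit schemes also sign entries
# and distribute copies; this only shows the chaining.
def append(chain, record)
  prev = chain.empty? ? "genesis" : chain.last[:hash]
  chain << { record: record, prev: prev,
             hash: Digest::SHA256.hexdigest(prev + record) }
end

def valid?(chain)
  chain.each_cons(2).all? { |a, b| b[:prev] == a[:hash] } &&
    chain.all? { |e| e[:hash] == Digest::SHA256.hexdigest(e[:prev] + e[:record]) }
end

log = []
append(log, "alice pays bob 10")
append(log, "bob pays carol 4")
valid?(log)                     # => true
log[0][:record] = "alice pays bob 1000"
valid?(log)                     # => false; the tampered entry no longer hashes
```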