I think we have a voting system for a reason. If people didn’t care, this would have dropped off the front page, but since this is currently the #2 story, clearly some people, myself included, enjoy this type of content. If you don’t want to see a story, then hide it.
You can apply “hide it if you don’t like it” to any story; it’s a bit of a discussion-stopper. One of the values in Lobsters is that it’s got interesting stories with a fairly good signal/noise ratio, and IMHO this isn’t very good signal/noise considering it’s a pretty minor release.
I […] wonder if better filtering (e.g. “pony + release”) would help.
The end game being a special release tag for each esoteric language and personal project people post so often that others regularly call it out?
People here work on really interesting things, major updates to those are often interesting and elicit good comments. Pony is interesting, and I actually think the Pony people are hurting their cause when instead of an on-topic discussion the responses to their posts end up being “please stop spamming this” or “can we have a special filter for this?”
I’m not “Pony people”? Like, far from it? I’m just very interested in everything proglang.
But the discussion about granularity comes up again and again, and I decided to take your example and see if we can make the platform better. I’m confused by your strong reaction, so I’ll let that rest here.
And no, the end game would not be that everything gets a tag, the end game would be that everything that has notable coverage here can get a tag here, so that people can filter.
As a (primary) developer of Monte, another language in the same niche as Pony, I get two boons out of this.
The first boon is the one that I reap when I actually click the link. I get to learn what, if any, interesting things have happened in the Pony world. Today, nothing interesting, but that’s okay.
The other boon is that I get a contrast, simply by not posting regular updates about Monte and giving it a version number, with Pony. This contrast helps widen the niche for capability-aware programming languages, and gives nuance and possibility to the ways in which both Monte and Pony could be used.
I do wonder about the degree to which this posting is a commercial advertisement, but I don’t feel that it is especially advertising.
We still have a few older TravisCI, Appveyor, and CircleCI tasks around, but those are all being deprecated.
GitHub actions are very nice for us as it is much easier to automate portions of our workflow than it was with “external” CI services.
CirrusCI is awesome because… we can test Linux, Windows, macOS, and FreeBSD. But most importantly, we have CI jobs that build LLVM from scratch, and CirrusCI allows us to get 8 CPU machines for the job. The only CI services we tried that have been able to handle the jobs that build LLVM are CircleCI and CirrusCI.
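For illustration, a rough sketch of what a multi-OS Cirrus CI config can look like – the image names and build commands here are placeholders, not the actual Pony configuration:

```yaml
# Placeholder .cirrus.yml sketch: one task per OS; the Linux task
# requests extra CPUs for a heavy build such as compiling LLVM
# from scratch.
linux_task:
  container:
    image: ubuntu:latest
    cpu: 8
    memory: 16G
  build_script: make build-llvm

freebsd_task:
  freebsd_instance:
    image_family: freebsd-13-2
  test_script: make test
```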
I switched a project from a self-hosted Buildbot to CirrusCI and I’ve also found it to be great. It’s so useful that they can spin up builders on demand and do so on your own GCP account.
I started as a programmer in the early 90s. During that time, the only Windows based computer I ever owned was a dedicated one that ran Cubase for recording. Despite that, about 3 weeks ago I switched off of MacOS after 12 years and got a Surface Book 2 that I’ve been using with WSL2. There are some weird and hinky things with Windows 10 that bother me but all in all, WSL2 is a better development experience than MacOS was.
When I got the Surface Book, it was because I also needed some software that made Linux a difficult choice. I told myself that if I didn’t need that software, I would be using Linux if I could find a good Linux laptop with a trackpad I can stand (I’m rather picky on the trackpad front). Now though, I’m not so sure.
The workflow of being able to spin up different Linux environments really quickly and be able to throw them away has been a wonderful development experience. Windows itself isn’t really in my mind worse than MacOS, it’s just bad in different ways.
At this point, Windows with WSL2 might end up being my primary development environment for a long time. Which strikes me as the weirdest thing I could say, because Windows was basically a giant unknown foreign entity for me. I’ve been a Unix*n developer for over 25 years and the idea of using Windows still seems foreign to me.
I think part of it might be how Apple seems to have thrown away their investment into a platform for creators/hackers in favor of going all in for consumer-oriented platforms. They should’ve done both. They had the money to make their Mac OS the platform that all the best creators would want. They literally could’ve bought most of the major companies creators used, ensured their products were most optimized for Mac OS, made the best hardware that was simultaneously optimized for those companies’ products, and kept the creators coming. They’d have the lead on consumer stuff, creator stuff, and still plenty of money to throw at shareholders.
Instead, the company whose co-founder was great at making billions off of consumers totally wasted their opportunities on the creators that made it what it was. They might still turn it around if they can somehow connect the dots.
At this point I’ve disliked all the Macbook Pro hardware since 2015 and starting with Sierra, I found MacOS to be a constant series of crashes for me. So, they have a lot of ground to make up in my mind. Not that they seem to care.
I don’t even like Apple and I still cannot upvote this enough. The iPod was the beginning of the end of the Apple that was worth loving because it showed that there was far, far more money in making classy, expensive consumer goods than in making high quality tech tools.
Sorry, but the iPod was fantastic. It did its job very well, and it felt well-made and of high quality.
The reason the newer MacBooks suck so bad is because they are not fit for purpose for many of our peers. A laptop that ceases to function when a speck of dust wriggles its way into the keyboard is not fit for purpose.
A well-designed product needs to do very well the thing that it was designed to do. The iPod did that. The newer MacBooks do not.
It was the creators’ platform for a while. High-end software for pictures, audio, and video was targeted to it. Building on the momentum they had with that audience would’ve kept money coming in and given people fewer reasons to use Windows. Instead, they kept pissing off their own customers while Windows PCs got better for those same customers all the time.
It doesn’t look like it rebounded much from Microsoft doing the Metro disaster or turning Windows into a surveillance platform, either. Some folks would like an alternative to Windows with as much usability, high-quality apps, and hardware drivers designed for it.
I thought you must’ve been playing devil’s advocate. You’re saying you want the purveyor of an ecosystem to buy up all the players in its ecosystem? I don’t think that’s ever been Apple’s forte, and if they messed up the platform they would’ve definitely messed up a play like this. I think there are other, significantly more prudent ways to invest in your ecosystem.
I’m basically saying they should’ve invested in their creator-oriented platform in all the ways big companies do. That includes acquiring good brands in various segments, then ensuring they work great on Mac and integrate well with other apps.
They can otherwise let them do their thing. Apple isn’t good at that. It would’ve helped them, though.
Although Apple might have dominated the creators market. Maybe they know something the rest of us don’t. The reason why they’re building $6000 computers and consumer-oriented laptops might be because that’s where the money is right now. https://www.youtube.com/watch?v=KRBIH0CA7ZU
If you have the proper library installed, be it openssl or pcre2, then bringing in regex, crypto, or net-ssl will work.
Glob has a transitive dependency on regex, so you need to tell the package manager to bring in both.
Carl Quinn has been working on a new package manager that handles transitive dependencies. With it, the situation is better.
But neither of the package managers that exist will help with the C library issue.
Our plan is to deprecate the existing package manager, stable, and replace it with the much more full-featured one, corral, that Carl has been working on.
We are still figuring out the system library issue but hope to address that using corral as well.
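For illustration, a minimal sketch of what declaring dependencies for corral might look like – the file name and schema here are assumptions for the sake of example, not taken from corral’s actual documentation:

```json
{
  "deps": [
    { "locator": "github.com/ponylang/regex.git" },
    { "locator": "github.com/ponylang/glob.git" }
  ]
}
```

The idea being that the tool would resolve the transitive regex dependency for glob automatically, instead of you listing both by hand.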
this seems like an excellent licence, clearly spelling out the intent of the copyright, rather than trying to fashion a one-size-fits-all set of rules. it reminds me of cory doctorow’s point that, intuitively, if some community theatre wanted to dramatise one of his works, they should be able to just do so, but if a major hollywood studio wanted to film it they should require a licence, and it is hard to draft a copyright law that does this properly.
Can this be extrapolated into a ‘BLISS’ principle: ‘Buy License if SaaS’
It can be. The question is not whether someone could do a thing, it’s whether they should do a thing.
And the answer to that question is: Cockroach Labs itself wants to offer CockroachDB as SaaS, and they see it as absolutely necessary that they have the exclusive right to decide whether anyone else can do that and charge money for the privilege. Fair enough, they hold the copyright on the software (presumably) and can relicense it as they wish.
But what happens to Cockroach Labs’ SaaS offering if every other component of the stack they run on adopts the same license and says “free but only if you’re not a for-profit SaaS”? If they have to pay dozens or, more likely, hundreds of separate license fees for the privilege of using all the other open-source components they depend on?
The answer is Cockroach Labs would not be in the SaaS business for very long after that, because they wouldn’t be able to turn a profit in such a world. The categorical imperative catches up to people. And the real result would be everybody forking from the last genuinely open-source version and routing around the damage that way.
But what happens to Cockroach Labs’ SaaS offering if every other component of the stack they run on adopts the same license and says “free but only if you’re not a for-profit SaaS”?
but cockroachdb, as far as i can make out, is not doing this - they’re saying “free, unless you’re a for-profit cockroach-db-as-a-saas”, that is, if what you are selling is a hosted version of cockroachdb itself, rather than some other saas product that happens to use cockroach as a backend.
Right. So assuming that Cockroach Labs offers no services except CockroachDB-as-a-service and a support line, Cockroach Labs would not have to pay for any additional licenses if all dependencies in their software stack switched to CockroachDB’s new license.
I think very few companies would be harmed if this license became prevalent. (I make no statement on the worth of the services of the few companies that would be harmed by such mass relicensing.)
Exactly. I think different kinds of projects end up preferring different kinds of licenses, for good reasons:
core infrastructure — libraries, runtimes, kernels, compilers — permissive and public domain-ish — because “stuff you were going to write anyway”, not written directly for profit, stuff you want to just exist and would love it if someone made a successful fork (because you wouldn’t have to maintain it anymore! — that’s most of my github projects) etc.
end user / GUI / client software — desktop, mobile apps — copyleft — because someone else turning your app into a proprietary one sucks, you want user freedom for the end users
SaaSable / Web Scale™ / serious business oriented server software — distributed DBMSes like this one — these “Buy License if SaaS” licenses — because reasons everyone discussed with the SaaS thing
Of course not everyone will agree with my philosophy here, but I think it’s good and much more productive than “I hate GPL” / “I hate permissive” / “the anti-SaaS stuff is destroying all FOSS ever”. You don’t have to attach yourself personally to a kind of license, you can adopt a philosophy of “different licenses for different kinds of projects”.
core infrastructure — libraries, runtimes, kernels, compilers — permissive and public domain-ish — because “stuff you were going to write anyway”,
I don’t think that’s true given the value that great infrastructure can provide, esp with good ecosystem. The mainframe companies, VMS Inc, Microsoft, and Apple all pull in billions of dollars selling infrastructure. The cloud companies sell customized and managed versions of open infrastructure. The vendors I reference making separation kernels, safety-critical runtimes, and certifying compilers are providing benefits you can’t get with most open code. Moreover, stuff in that last sentence costs more to make both in developer expertise and time.
I think suppliers should keep experimenting with new licenses for selling infrastructure. These new licenses fit that case better than in the past. If not open, then shared source like Sciter has been doing a long time. I’d still like to see shared source plus paying customers allowed to make unsupported forks and extensions whose licenses can’t be revoked so long as they pay. That gets really close to benefits of open source.
Of course there’s still companies selling specialized, big, serious things. But FOSS infrastructure has largely won outside of these niches. Linux is everywhere, even in smart toilets and remote controlled dildos :D Joyent has open sourced their whole cloud stack. Google has open sourced Bazel, Kubernetes, many frontend frameworks… Etc. etc.
shared source plus paying customers allowed to make unsupported forks and extensions whose licenses can’t be revoked so long as they pay
IIRC that’s the Unreal Engine 4 model. It’s… better than hidden-source proprietary, I guess.
separation kernels, safety-critical runtimes, and certifying compilers are providing benefits you can’t get with most open code
I’ve heard of some of these things.. but they’ve been FOSS mostly. NOVA: GPLv2. Muen: GPLv3. seL4: mix of BSD and GPLv2. CompCert: mix of non-commercial and GPLv2.
“But FOSS infrastructure has largely won outside of these niches.”
Free stuff that works well enough is hard to argue with. So, FOSS definitely wins by default in many infrastructure settings.
“but they’ve been FOSS mostly. NOVA: GPLv2. Muen: GPLv3. seL4: mix of BSD and GPLv2. CompCert: mix of non-commercial and GPLv2.”
They’ve (pdf) all been cathedral-style, paid developments by proprietary vendors or academics. A few became commercial products. A few were incidentally open-sourced, with one, Genode, having some community activity; seL4 may have some. Most seL4-based developments I’ve seen are done by paid folks. The data indicates the best results in security-focused projects come when qualified people are paid to work on them. The community can do value-adds, shake bugs out, help with packaging/docs, translate, etc. The core design and security usually require a core team of specialists, though. That tends to suggest paid models with shared source, or a mix that includes F/OSS, are the best model to incentivize further development.
“and remote controlled dildos :D”
There’s undoubtedly some developer that got laid off from their job shoving Windows CE or Symbian into devices that were once hot who dreamed of building bigger, better, and smarter dildos that showed off what their platforms had. The humiliation that followed wasn’t a smiling matter, Sir. For some, it may have not been the first time either.
cathedral-style, paid developments by proprietary vendors or academics
Yes, the discussion was about licensing, not community vs paid development. For this kind of project, I don’t see how non-FOSS shared source licensing would benefit anyone.
Individuals outside business context could use, inspect, and modify the product for anywhere from cheap to free. Commercial users buy a license that’s anything from cheap to enterprise-priced. The commercial use generates revenues that pay the developers. Project keeps getting focused work by talented people. Folks working on it might also be able to maintain work-life balance. If 40-hr workweek, then they have spare time and energy for other projects (eg F/OSS). If mix of shared-source and F/OSS, a percentage of the funds will go to F/OSS.
I think that covers a large number of users with acceptable tradeoffs. Harder to market than something free. The size of the security and privacy markets makes me think someone would buy it.
Cockroach Labs likes getting things for free, but has decided that they don’t like giving things away for free. This is a choice they have the legal right to make, of course, but that doesn’t necessarily make it the right decision.
From a business perspective, it’s a very bad sign. A company suddenly switching from open source to proprietary/“source available” is usually a company where the vultures are already circling. And mostly it indicates a fundamental problem with the business model; changing the license like this doesn’t fix that problem, and in fact can’t fix it. If demand for CockroachDB is significant enough, other people will fork from the last open-source release and keep it going. If demand for it isn’t significant enough, well, they won’t. And either way, Cockroach Labs probably won’t make back what the VCs invested into it.
From a software-ecosystem perspective, it’s more than a bit hypocritical. Lots of people build and distribute permissive-licensed software, and Cockroach Labs has, if not profited (since they may not be profitable) from it, at least saved significant up-front development cost as a result. If what they wanted was a copyleft-style share-and-share-alike, there were licenses available to let them do that (which, from a business perspective, still would not have saved them). But that’s not really what they wanted (and by “they” I mean the people in a position to impose decisions, which does not mean the engineering team or possibly even the executive team). What they seem to have wanted was to be proprietary from the start, and therefore to have absolute control over who was allowed to compete with them and on what terms. There is no open-source or Free Software license available which achieves that goal; the AGPL comes closest, but still doesn’t quite get there.
And there simply may not have been a business model available for CockroachDB that would satisfy their investors, but Cockroach Labs was founded at a time when it already should have been clear – especially to a founding team of ex-Googlers – where the market was heading with respect to managed offerings for this type of software. They could have tried other options, like putting more work into integrating with cloud providers’ marketplaces, but instead they knowingly signed up to get their lunch eaten, and do in fact appear to have gotten their lunch eaten.
You are hinting that Cockroach Labs are trying to act as freeloaders while ignoring the real elephant in the room: SaaS providers.
I’m pointing out the simple fact that Cockroach Labs wants to have the right to build a business on open-source software, but wants to say that other entities shouldn’t have that same right. That’s literally what this comes down to, and literally what their new license tries to say.
Cockroach Labs likes getting things for free, but has decided that they don’t like giving things away for free.
That’s an unfair characterization. The code they use is made by people who like giving stuff away for free. If permissive, they’ve already chosen a license that lets commercial software reuse it without giving back any changes. If copyleft under GPL-like license, there’s already bypasses to sharing like SaaS that they’re implicitly allowing by not using a strong license. They’re also doing this in a market where most users of their libraries freeload. They then release the code under that license knowing all this for whatever reasons they have in mind.
And then Cockroach Labs, whose goal is a mix of profit and public benefit, uses some of the code they were given for free. They modify the license to suit their goals. Each party contributing code should be fine with the result because each one is doing exactly what you’d expect with their licenses and incentives. If anything, CockroachDB is going out of their way to be more altruistic than other for-profit parties. They could be locking stuff up more.
They approve of the “take open-source software and build a business on it without financially supporting all the authors in a sustainable way” approach when it’s them doing it with other people’s software. They don’t approve when it’s Amazon doing it with CockroachDB. You can try to spin it, but that’s really what it comes down to.
And they want control over who’s allowed to compete with them and who’s allowed to use their software for what purposes. That’s fundamentally incompatible with their software being open source, and they’ve finally realized that, but it’s a bit late to be suddenly trying to change to proprietary.
I agree it won’t be open source software when they relicense it. I disagree that there’s any spin. I tell people who want to force contributions or money back to put it in their license with a clause blocking relicensing to non-OSS/FOSS. Yet, the OSS people still keep using licenses or contributing to software with such licenses that facilitate exactly what CockroachDB-like companies are doing.
I don’t see how it’s evil or hypocritical for a for-profit company acting in self-interests to use licensed components whose authors choose knowing it facilitates that. It wasn’t the developers only option. There was a ton of freeloading and hoarding of permissively-licensed components before they made the choice. Developers wanting contributions from selfish parties, esp companies, should use licenses that force like AGPL or Parity. The kinds of companies they gripe about mostly avoid that stuff. This building on permissive licensing and relicensing problem has two causes, not one.
Note: There’s also people that don’t care if companies do that since they’re just trying to improve software they and other people use. Just figured I should mention that in case they’re reading.
I don’t see how it’s evil or hypocritical for a for-profit company acting in self-interests to use licensed components whose authors choose knowing it facilitates that.
It’s not “evil”. But it is at least a bit hypocritical to decide that you’re OK doing something yourself, but not with other people doing it too.
Given their intended business model, CockroachDB probably should have been proprietary from the start. Would’ve avoided this specific headache (but probably still wouldn’t have avoided the problem with the business model they chose).
“But it is at least a bit hypocritical to decide that you’re OK doing something yourself, but not with other people doing it too.”
“CockroachDB probably should have been proprietary from the start”
“three years after each release, the license converts to the standard Apache 2.0 license”
Amazon isn’t giving all their stuff away after three years under a permissive, open-source license. What we’re really discussing is a company that will delay open-sourcing code by three years, not just license proprietary software. Every year, they’ll produce more open-source code. It will be three years behind the proprietary, shared-source version everyone can use except for SaaS companies cloning and selling their software. You’re talking like they’re not giving anything back or doing any OSS. They are. It’s just in a way that captures some market value out of it.
In contrast, the people making OSS dependencies usually aren’t doing anything to capture business value out of the code. If anything, they’re not even trying to. They’re directly or indirectly encouraging commercial freeloading with a license that enables it instead of using one that forbids it. So, CockroachDB doesn’t owe them anything or have any incentive to pay. Whereas, CockroachDB’s goal is to make profit on their own work. The goal differences are why there’s no hypocrisy here. It would be different if the component developers were copylefting or charging for CockroachDB’s dependencies with the company not returning code or pirating the components.
Have you heard anyone at Cockroach Labs say this? Wouldn’t they be able to offer their service based on 3 year old versions of every piece of OSS they use? It seems to me this license would work fine transitively, so there’s no hypocrisy involved.
If they have to pay dozens or, more likely, hundreds of separate license fees for the privilege of using all the other open-source components they depend on?
Sounds good to me. They have had millions of dollars of funding, they can easily pay some money to people who deserve it.
Thx @trousers and @johnaj for the clarification.
I guess, for me, this ‘muddied the waters’, so to speak.
Say, hypothetically, I have a SaaS that allows my customers to upload logs from IoT devices, a schema (in my DSL) explaining the data, and some SQL-like (but possibly also in my DSL) queries about their data.
My service is to provide the results of the queries back to them via dashboards/PDFs etc.
The hypothetical SaaS charges for that service (and hopes, in some distant future, to make net profit)
Underneath, I want to use CockroachDB.
When a customer provides their data description in the DSL, I actually translate it into a CockroachDB schema, and create materialized and non-materialized views (I do not know if the DB supports this; let’s assume it does).
I do that so that customer’s queries can be translated to database statements more easily (and run efficiently).
So I have a SaaS service, and allow customers (although indirectly) to create schemas specific to their data in my database.
Will I need a license?
From what I am reading right now, I will.
This is not good or bad – but I hope, then, that Postgres would never adopt BLISS.
Maybe I am wrong… so I hope to hear what others think.
No. I think anything that is indirect (they are not using the wire protocol or directly issuing queries) is not going to require a license.
That said, I can see how your example is demonstrative of a possible problem – if Amazon created, say, a GraphQL layer in front of it that just sort of translated to and from CockroachDB, would that give them safety license-wise? I think it would.
Right, there is ambiguity about the ‘type or class’ of layers that when added, will not require a license vs layers that will require a license.
If I correctly understand the spirit and the intent of their license,
I actually think CockroachDB should protect themselves, and specify the following layers:
a) security + access control layers
b) performance + scalability layers
c) general (not domain-specific) query meta-language layers
If a SaaS business added, in essence, only the above layers on top of their DB,
and then sold that together with CockroachDB as SaaS – they would need the BLISS license.
Also, at the end of the day, their license may end up being, still, free for some businesses that fall under BLISS – but I think the CockroachDB team and their investors want to be in control of that decision…
I will say, this is a Pony program that uses SDL by directly linking against external C functions! Most SDL hello-world examples you’ll see in other languages use a library wrapping the external calls. I think it speaks volumes that the Pony source is both readable and short, especially considering that Pony is a managed language with support for actors. (In comparison, the C FFI in both Go and Erlang tends to be much harder.)
We definitely have a considerably smaller community than Rust at this point. In part, I think that is:
1- Rust has corporate backing and people paid to work on it
2- It’s post 1.0 and most people won’t consider using pre-1.0 stuff
More people contributing to the Pony ecosystem (including libraries) is something we could really use and would happily support as best we can. We’ve had a lot of excited people start to pitch in. Most haven’t contributed to open source projects before and I think for many, the shine rather quickly rubs off. I don’t blame them, maintaining open source software is… well, it’s an “interesting hobby”.
Pony can directly call into C (and therefore into Rust) and generate C headers, which Rust can consume to generate interfaces. The biggest problem for “just” using both from both sides is language semantics outside of the function-call interface that the C ABI provides. Also, Rust’s ABI is currently private.
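To make that C-ABI boundary concrete, here’s a minimal sketch of the Rust side: a function exported with the C calling convention, which a Pony program could then declare and call as an external C function. The function name and signature are made up purely for illustration.

```rust
// Exporting a Rust function over the stable C ABI. `#[no_mangle]`
// keeps the symbol name predictable, and `extern "C"` uses the C
// calling convention, so a C-FFI-capable language like Pony can
// link against it. The name `add_u32` is an illustrative example.
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    // Wrapping add avoids a panic on overflow at the FFI boundary.
    a.wrapping_add(b)
}

fn main() {
    // Callable from Rust directly too; across the FFI, a Pony
    // caller would declare it as an external C function.
    println!("{}", add_u32(2, 3));
}
```

Going the other direction (Rust calling Pony) is where the generated C headers come in, but anything beyond plain functions and C-compatible data runs into the semantic mismatch mentioned above.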
I don’t do much system level programming, so I don’t need either, really, so I’m very unlikely to step up.
Both Rust and Pony provide safety guarantees and features way past “safe systems programming” and Rust is definitely used as a “wicked fast Python” in some circles. It’s an interesting space to work in, currently :).
I really like the syntax, and the underlying ideas. I recently speed read through the tutorial, and the most daunting aspect of it (for me) was the reference capabilities part. I hope I can find a side project to play with it some more.
Plus the language is named pony, which makes it instantly great. ;)
I’m not sure what you mean about Pony or Cloud Haskell but I have some answers to why not Erlang.
Statically typing Erlang is a different and considerably more difficult problem than making a compatible, soundly typed language. Realistically, for a type checker for Erlang to get adoption, it needs to be flexible enough to be applied incrementally to existing Erlang codebases, which means gradual typing. Gradual typing offers a very different set of guarantees to those that Gleam offers. By default it’s unsafe, which isn’t what I wanted.
There are also some more human issues, such as my not being part of Ericsson: I would have little to no ability to make use of or adapt the existing compiler, so I would have to attempt to be compatible with it. Even if we were successful in that, the add-on nature means it becomes more of a battle to get it adopted in Erlang projects – something that I would say the officially supported Dialyzer has struggled to do.
Overall I don’t believe it would be an achievable goal for me to add static types to Erlang, but making a new language is very achievable. It also gives us an opportunity to do things differently from Erlang – I love Erlang, but it certainly has some bits that I like less.
I’m not sure what the point of Pony on BEAM would be. Pony’s reference capabilities statically prove that “unsafe things” are safe to do. BEAM wouldn’t allow you to take advantage of that. Simplest example: message passing would result in data copying. This doesn’t happen in the Pony runtime. You’d be better off using one of the languages that runs on BEAM, including ones with static typing.
As you might imagine, I’m very interested in how Pony types actors. I skimmed the documentation, but many aspects, such as supervision trees, were unclear to me (likely my fault). Are there any materials on structuring Pony applications you could recommend? Thank you :)
There are no supervision trees at this time in Pony. How you would go about doing that is an interesting open question.
Because Pony is statically typed and you can’t have null references, the way you send a message to an actor is by holding a “tag” reference to that actor and calling behaviors on it. This has several advantages but, if you were to try to do Erlang-style “let it crash”, you run into difficulties.
What does it mean to “let it crash”? You can’t invalidate the references that actors hold to this actor. You could perhaps “reset” the actors in some fashion but there’s nothing built into the Pony distribution to do that, nor are the mechanics something anyone has solid ideas about.
I’m really glad you pointed this out as I’ve been planning on transitioning some of the Pony CI over to buildkite and it would have sucked when I found out about this.
Is there a reason to consider this instead of something like 1Password or Bitwarden? I read the FAQ and it seems like it is the same except tied to Firefox.
Dunno about their offerings, but what I personally like about Lockbox is that it’s got a super friendly UX (it caters for simplicity of use rather than to us nerds :-)). I already use Firefox Sync on all my devices, and the Android autofill API allows using it for apps too.
Lockbox is also free of charge, free software and encrypts data on the client, not in the cloud.
You can get GCC with about 26 MB worth of downloads; Visual Studio is about two orders of magnitude above that. In addition, I don’t believe the Visual Studio compiler is open source. So until that changes I will stick with projects that use GCC/Clang.
And I’d suggest starting by writing your own simple operating system kernel. Personally, I moved from there to studying Minix (this was many years ago). Minix was (and I hear still is) great for learning from. It’s different from Linux in that it has a microkernel architecture; that said, you will learn a lot from it because it’s easy to read through and understand. Between writing your own and Minix, you’ll be off to a good start.
I still find the source for the various BSDs easier to follow than Linux and would suggest making that your next big move; graduate to a BSD if you will and from there, you could leap to Linux.
One other thing to consider: picking a less-used kernel to start hacking on when you feel comfortable might be a good idea if you can find folks in that community to mentor you. In the end, the code base matters less than having people who are grateful for your assistance and want to help you learn. In my experience, smaller communities are more likely to be ones you can find mentors in. That said, your mileage may vary greatly.
I could take guesses based on number of people who are committers and the development process as to why that is the case, but in the end, it would be speculation. I know I’m not alone in this feeling, but I don’t know if I’m in the majority or minority.
It’s not about Linux per se, but it does relate to how operating systems work and a similar kernel: The Design and Implementation of the 4.4BSD Operating System. I bought it on the recommendation of John Carmack and the depth it goes into is great. Every chapter also has little quizzes without answers, so you can confirm to yourself you know how the described system components should work.
The irony is that he’s now trying to build better tools that use embedded DSLs instead of YAML files, but the market is so saturated with YAML that I don’t think the new tools he’s working on have a chance of gaining traction, and that seems to be the major source of angst in that thread.
One of the analogies I like about the software ecosystem is yeast drowning in the byproducts of their own metabolic processes after converting sugar into alcohol. Computation is a magical substrate, but we keep squandering the magic. The irony is that Michael initially squandered the magic, and in the new, less magical regime his new tools don’t have a home. He contributed to the code-less mess he’s decrying, because Ansible is one of the buggiest and slowest infrastructure management tools I’ve ever used.
I suspect that, as with all hype cycles, people will figure it out eventually: Ant used to be a thing, and it is now mostly accepted that XML for a build system is a bad idea. Maybe eventually people will figure out that infrastructure-as-YAML is not sustainable.
Thanks for bringing Pulumi to my radar, I hadn’t heard of it earlier. It seems quite close to what I’m currently trying to master, Terraform. So I ended up here: https://pulumi.io/reference/vs/terraform.html – where they say
Terraform, by default, requires that you manage concurrency and state manually, by way of its “state files.” Pulumi, in contrast, uses the free app.pulumi.com service to eliminate these concerns. This makes getting started with Pulumi, and operationalizing it in a team setting, much easier.
Which to me seemed rather dishonest. Terraform’s way seems much more flexible and doesn’t tie me to HashiCorp if I don’t want that. Pulumi seems like a modern SaaS money vacuum: https://www.pulumi.com/pricing/
The positive side, of course, is that doing many programmatic things in Terraform HCL is quite painful, as tends to be the case in any non-Turing-complete language when you stray from the path the language designers built for you… Pulumi obviously handles that much better.
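As a concrete (hypothetical) example of the pain point: producing one resource description per subnet is a plain loop in a general-purpose language, where HCL pushes you toward count/for_each constructs. The names below are made up for illustration, not a real Pulumi or Terraform API.

```python
# Hypothetical resource generation: one route entry per CIDR block.
# In a general-purpose language this is an ordinary loop; in a
# non-Turing-complete config language you bend the problem to fit
# constructs like HCL's count/for_each instead.
cidrs = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]

routes = [
    {"name": f"subnet-{i}", "cidr": cidr, "nat": i > 0}
    for i, cidr in enumerate(cidrs)
]

assert routes[0] == {"name": "subnet-0", "cidr": "10.0.0.0/24", "nat": False}
assert all(r["nat"] for r in routes[1:])
```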
In a burndown chart, I want to be able to run simulations as well. “What happens to this project if X work falls behind. What happens if Bob gets sick?”
Is anything in software development predictable enough to make this kind of analysis useful?
No, this is basically a management wish-fulfillment fantasy.
Seeing that we thought something would take 8 hours of work but we spent 24 is incredibly valuable. We can revisit the assumptions we made when estimating and see where we got it wrong. Then, we can try to account for it next time. Yes, estimating is hard, but it’s also a skill you can get better at if you work at it and have the support of proper tools.
Likewise, I have heard this asserted by every manager I’ve ever worked with and for. No evidence has ever been presented, nor have estimates actually improved over time. (The usual anti-pattern is: estimates get worse over time because every time an estimate turns out to be low, a fix is proposed–“what if we do all-team estimates? what if we update estimates more frequently?”–which inevitably means spending more time on estimates, which means spending less time on development, which means we get slower and slower.)
I personally used to be quite bad at estimating. I’ve worked at it, and I’ve gotten much better at it. There are things you can do to improve. None of the things you’ve mentioned are ones I think would help. I plan on writing a post about the things I’ve learned, and taught others, that have helped make estimates more accurate.
Two things I would recommend (and they will be the primary topics of said blog post):
Estimates slipping is usually about not accurately accounting for risk. Waltzing with Bears is a great book on dealing with risk management. The ideas in it might be overkill for many folks but the process of thinking about how you should account for risk and establishing your own practices is invaluable. The book is great even if you only use it as “that seems overblown, what if I…”.
The second is to record your estimates and why you made them: what did you know at the time that you made your estimate? Then, when your estimate is wrong, examine why. What didn’t you account for? When I first started doing this, I realized that most of my estimates were wrong because I didn’t bother to explore the problem enough and was being tripped up by not accounting for known knowns. Eventually I got better at that and started getting tripped up by known unknowns (that’s risk). I’ve since adopted some techniques for fleshing out risks when I am estimating and then front-loading work on those risks. If you think something might take a week or might take a month, work on that first. Dig in to understand the problem so you can better estimate it. Maybe the problem is way harder than you imagine and you shouldn’t be doing the project at all. This isn’t a new concept, but it’s one that is rarely used. In agile methodologies, it’s usually called a “spike”.
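A minimal version of “record your estimates and why” needs nothing more than a log you revisit; the structure below is hypothetical, just to show the habit.

```python
# Hypothetical estimate log: record the estimate, the assumption behind
# it, and the actual outcome, then review the ratio when you're wrong.
log = [
    {"task": "auth refactor", "est_h": 8, "actual_h": 24,
     "assumed": "no schema changes needed"},
    {"task": "csv export", "est_h": 4, "actual_h": 5,
     "assumed": "reuse existing report query"},
]

for entry in log:
    ratio = entry["actual_h"] / entry["est_h"]
    if ratio > 1.5:  # worth a post-mortem: which assumption failed?
        print(f"{entry['task']}: {ratio:.1f}x over estimate; "
              f"failed assumption? {entry['assumed']}")
```

Over time the failed assumptions cluster (known knowns you skipped, then known unknowns), which is exactly the feedback that makes the next estimate better.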
I’ve worked on projects that spanned months where we spent a couple of weeks on estimation and planning. A big part of that time went to digging in, understanding the problem better, discussing it, and figuring out what we needed to explore further to really understand the project, so we could course-correct as we went along.
Your customers might not be happy with your team constantly running late. Your pre-revenue startup might have a hard time raising investment. Whatever. There are external reasons for why a professional developer must be reliable in his or her estimates to actually get things out the door.
I’ve been changing my opinion on this back and forth. Especially in a piss-poor startup, where the biz guys wanted us to skip unit testing to achieve results faster, refusing to estimate was a fuck-you. The code base got convoluted, but dealing with how they represented things was also frustrating.
I feel that in those cases the problem runs deeper, in how geeks are supposed to be managed. Hell, it could be that estimation starts eating up time because the managers drove the geeks into protesting, which is, of course, as unprofessional as delivering late. Still, you need to sort out your org for smooth ops before taking care of estimates.
Yet this isn’t car repair for old vehicles, where you find more and more problems as you go along, making estimates tough without thorough diagnostics, but the customer is happy with the substitute car you gave out in the meantime.
The fact that customers want something, or that it is necessary to the business’s success, does not cause it to become possible. I’m not disputing the desirability of accurate estimates. I’m disputing the idea that they are possible. I have not seen any team or technique generate such estimates reliably, over time, in various circumstances. (Of course, like any gambler with a “system,” sometimes people making estimates get lucky and they turn out to be right by coincidence.) I have heard many managers claim to have a system for reliable estimates which worked in some previous job; none was able to replicate that success on the teams I observed directly.
(It’s not just software, either. Many people point to the building trades as an example of successful estimation and scheduling. Based on my experience maintaining and restoring an old house, and the experiences of friends and acquaintances who’ve undertaken more ambitious restorations, this is mostly wishful thinking. It’s common for estimates by restoration contractors on larger jobs to be off by months or years, and by vast amounts of money. If so mature an industry can’t manage reliable scheduling, what hope is there for us?)
Yet this isn’t car repair for old vehicles, where you find more and more problems as you go along, making estimates tough without thorough diagnostics, but the customer is happy with the substitute car you gave out in the meantime.
I’d argue that is exactly what much software development is like (except there is no substitute for the customer).
Maybe I’d like to be more optimistic about learning to estimate better ;) But for sure @SeanTAllen touched on a lot of pertinent points. Is it Alice or Bob who gets the task? How well is the problem space, the code, known? And so on.
It’s hard as balls, and you’re not wrong with your gambler analogy, but not all systems for getting things right occasionally are equally unlikely to succeed. Renovators also learn what to look out for and how long those issues tend to take, as well as how they interact. Customers are usually OK with probabilities if that’s all you’ve got.
In my car analogy, the point kinda was that we’re screwed because we can’t give out substitutes. We can deliver temporary hacks, although nothing is as permanent as a temporary hack.
A lot of activities in software development try to improve predictability. For example: following style guidelines, continuous integration, unit testing, etc. All of these have a cost and slow developers down. The upside of course is to reduce bugs which will slow you down much more later. Or maybe not. The risk is generally too high, so we generally prefer the predictability of a slow incremental process.
I have a feeling that Deming thought about this when he talked about “variation”, but I need to read more from the father of Lean and grandfather of Agile to understand it. Currently, I believe I don’t assign quite the correct meaning to the words I read from him.
“When pursuing a vertical scaling strategy, you will eventually run up against limits. You won’t be able to add more memory, add more disk, add more “something.” When that day comes, you’ll need to find a way to scale your application horizontally to address the problem.”
I should note that the limit, last I checked, was SGI UVs for data-intensive apps (e.g. SAP HANA) having 64 sockets with multicore Xeons, 64 TB RAM, and 500 ns max latency for all-to-all communication. I could have sworn one had 256 sockets, but I may be misremembering. Most NUMA machines also have high-availability features (e.g. “RAS”). So, if it’s one application per server (i.e. just the DB), many businesses might never run into a limit on these things. The main limit I saw studying NUMA machines was price: scaling up costs a fortune compared to scaling out. One can now get stuff in the low-to-mid five digits that previously cost six to seven. Future-proofing scale-up by starting with SGI-style servers has to be more expensive, though, than starting with scale-out, even if the scale-out starts on a beefy machine.
You really should modify the article to bring pricing up. The high price of NUMA machines was literally the reason for inventing Beowulf clusters, which pushed a lot of the “spread it out on many machines” philosophy toward the mainstream. The early companies selling them always compared the price of, e.g., a 64-256-core machine by Sun/SGI/Cray vs a cluster of 2-4-core boxes: the first was the price of a mini-mansion (or a castle, for clustered NUMAs), with the second ranging from a new car to a middle-class house. HA clustering goes back further, with VMS, NonStop, and mainframe stuff. I’m not sure if cost pushed horizontal scaling for fault tolerance to get away from them, or if folks were just building on popular ecosystems. Probably a mix, but I have no data.
“The number of replicas, also known as ‘the replication factor,’ allows us to survive the loss of some members of the system (usually referred to as a ‘cluster’).”
I’ll add that each replica could experience the same failure, especially if we’re talking attacks. That happened to me in a triple-modular-redundancy setup with a single faulty component. On top of replication, I push hardware/software diversity as much as one’s resources allow: CPUs built on different tools/nodes, different mobos and UPSes, maybe optical connections if worried about electrical faults, different OSes, different libraries that perform the same function, different compilers, and so on. The thing that’s the same on each node is the one app you want to work. Even that might be several implementations written by different people, with the cluster running a mix of them. The one thing that has to be shared is the protocol for starting it all up, syncing the state, and recovering from problems. Critical layers like that should get the strongest verification the team can afford, with SQLite and FoundationDB being the exemplars in that area.
Then, it’s really replicated in a fault-isolating way. It’s also got a lot of extra failure modes one has to test for. Good news is several companies and/or volunteers can chip in each working on one of the 3+ hardware/software systems. Split the cost up. :)
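The numbers make the point about identical failures concrete. With made-up, illustrative probabilities: independent replica faults multiply away fast, but a common-mode fault in a shared component ignores the replication factor entirely.

```python
# Illustrative, made-up probabilities; the point is the structure.
p_node_down = 0.01          # independent hardware fault per replica
replication_factor = 3

# Independent faults: all replicas must fail at once, so the
# probabilities multiply away quickly (0.01^3 = one in a million).
p_all_down_independent = p_node_down ** replication_factor

# Common-mode fault (identical buggy component on every node):
# one trigger takes out every replica, regardless of the factor.
p_common_mode = 0.001

p_cluster_down = p_all_down_independent + p_common_mode
assert p_all_down_independent < 1e-05
assert p_cluster_down > 100 * p_all_down_independent  # common mode dominates
```

This is why diversity matters: it pushes the common-mode term back toward the independent one.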
I don’t think we need to have threads about every minor bugfix release of Pony (or indeed, any software).
It has a pretty interesting “coming up” section.
I like these posts and would like to see them.
I appreciate your position though and wonder if better filtering (e.g. “pony + release”) would help.
I do wonder about the degree to which this posting is a commercial advertisement, but it doesn’t feel especially like advertising to me.
Today I learned about Monte. Nice. I’m glad you posted it.
The Pony project is primarily using:
We still have a few older TravisCI, Appveyor, and CircleCI tasks around, but those are all being deprecated.
GitHub actions are very nice for us as it is much easier to automate portions of our workflow than it was with “external” CI services.
CirrusCI is awesome because… we can test Linux, Windows, macOS, and FreeBSD. Most importantly, though, we have CI jobs that build LLVM from scratch, and CirrusCI allows us to get 8-CPU machines for the job. The only CI services we tried that have been able to handle the jobs that build LLVM are CircleCI and CirrusCI.
I switched a project from a self-hosted Buildbot to CirrusCI and I’ve also found it to be great. It’s so useful that they can spin up builders on demand and do so on your own GCP account.
Aside from a number of settings that I’ve changed, I use the following extensions.
Some settings changes:
Theme:
Language:
File type support:
Other:
I started as a programmer in the early 90s. During that time, the only Windows-based computer I ever owned was a dedicated one that ran Cubase for recording. Despite that, about 3 weeks ago I switched off of MacOS after 12 years and got a Surface Book 2 that I’ve been using with WSL2. There are some weird and hinky things with Windows 10 that bother me, but all in all, WSL2 is a better development experience than MacOS was.
When I got the Surface Book, it was because I also need some software that made Linux a difficult choice. I told myself that if I didn’t need that software, I would be using Linux if I could find a good Linux laptop with a trackpad I can stand (I’m rather picky on the trackpad front). Now though, I’m not so sure.
The workflow of being able to spin up different Linux environments really quickly and be able to throw them away has been a wonderful development experience. Windows itself isn’t really in my mind worse than MacOS, it’s just bad in different ways.
At this point, Windows with WSL2 might end up being my primary development environment for a long time. Which strikes me as the weirdest thing I could say, because Windows was basically a giant unknown foreign entity to me. I’ve been a *nix developer for over 25 years and the idea of using Windows still seems foreign to me.
¯\_(ツ)_/¯
I think part of it might be how Apple seems to have thrown away their investment into a platform for creators/hackers in favor of going all in for consumer-oriented platforms. They should’ve done both. They had the money to make their Mac OS the platform that all the best creators would want. They literally could’ve bought most of the major companies creators used, ensured their products were most optimized for Mac OS, made the best hardware that was simultaneously optimize for those companies’ products, and kept the creators coming. They’d have the lead on consumer stuff, creator stuff, and still plenty of money to throw at shareholders.
Instead, the company whose co-founder was great at making billions off of consumers totally wasted their opportunities on the creators that made it what it was. They might still turn it around if they can somehow connect the dots.
Perhaps.
At this point I’ve disliked all the Macbook Pro hardware since 2015 and starting with Sierra, I found MacOS to be a constant series of crashes for me. So, they have a lot of ground to make up in my mind. Not that they seem to care.
I don’t even like Apple and I still cannot upvote this enough. The iPod was the beginning of the end of the Apple that was worth loving because it showed that there was far, far more money in making classy, expensive consumer goods than in making high quality tech tools.
Sorry, but the iPod was fantastic. It did its job really very well, and it felt well-made and of high quality.
The reason the newer MacBooks suck so badly is that they are not fit for purpose for many of our peers. A laptop that ceases to function when a speck of dust wriggles its way into the keyboard is not fit for purpose.
A well-designed product needs to do very well the thing that it was designed to do. The iPod did that. The newer MacBooks do not.
Oh god please no. Why and how would this have been a good thing!?
It was the creators’ platform for a while. High-end software for pictures, audio, and video was targeted at it. Building on the momentum they had with that audience would’ve kept money coming in, with fewer reasons for people to use Windows. Instead, they were pissing off their own customers while Windows PCs got better for those same customers all the time.
It doesn’t look like it rebounded much from Microsoft doing the Metro disaster or turning Windows into a surveillance platform, either. Some folks would like an alternative to Windows with as much usability, high-quality apps, and hardware drivers designed for it.
I thought you must’ve been playing devil’s advocate. You’re saying you want the purveyor of an ecosystem to buy up all the players in its ecosystem? I don’t think that’s ever been Apple’s forte, and if they messed up the platform they would’ve definitely messed up a play like this. I think there are other, significantly more prudent ways to invest in your ecosystem.
I’m basically saying they should’ve invested in their creator-oriented platform in all the ways big companies do. That includes acquiring good brands in various segments, followed up with ensuring they work great on Mac and integrate well with other apps.
They can otherwise let them do their thing. Apple isn’t good at that. It would’ve helped them, though.
Apple might have dominated the creators market once, though. Maybe they know something the rest of us don’t. The reason they’re building $6000 computers and consumer-oriented laptops might be that that’s where the money is right now. https://www.youtube.com/watch?v=KRBIH0CA7ZU
For reference (since I was interested and looked it up), wikipedia says the SurfaceBook 2 has:
https://shru.gg/r might be of service
What’s the new workflow to use regex, glob, crypto packages? Do these “just work” if installed from source with the Pony package manager?
Not quite. Not yet anyway.
If you have the proper library be it openssl or pcre2 installed then bringing in regex, crypto, or net-ssl will work.
Glob has a transitive dependency on regex, so you need to tell the package manager to bring in both.
Carl Quinn has been working on a new package manager that handles transitive dependencies. With it, the situation is better.
But neither of the package managers that exist will help with the C library issue.
Our plan is to deprecate the existing package manager, stable, and replace it with the much more full-featured one, corral, that Carl has been working on.
We are still figuring out the system library issue but hope to address that using corral as well.
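What corral adds over stable is essentially a transitive walk of the dependency graph. A sketch of that walk in Python, with a hypothetical package graph rather than either tool’s actual format:

```python
# Hypothetical dependency graph: glob needs regex; regex itself needs
# the system pcre2 library, which no Pony package manager installs.
deps = {
    "glob": ["regex"],
    "regex": [],        # pure-Pony deps only; pcre2 is a system library
    "net-ssl": [],
}

def resolve(pkg, graph, seen=None):
    """Depth-first walk returning pkg plus all of its transitive deps."""
    if seen is None:
        seen = []
    if pkg not in seen:
        seen.append(pkg)
        for dep in graph[pkg]:
            resolve(dep, graph, seen)
    return seen

# With a stable-style manager you list both packages by hand;
# a transitive resolver discovers regex from glob automatically.
assert resolve("glob", deps) == ["glob", "regex"]
```

The system-library half of the problem (pcre2, openssl) sits outside this graph, which is why it needs separate handling.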
Interested in hearing other views, but I think what they are doing is reasonable.
Can this be extrapolated into a ‘BLISS’ principle: ‘Buy License If SaaS’? (I just came up with the abbreviation :-))
“.. The one and only thing that you cannot do is offer a commercial version of CockroachDB as a service without buying a license. ..”
They should probably provide some examples of what they consider a CockroachDB service, vs a service that’s using CockroachDB underneath.
agreed. copying my comment over from hn:
this seems like an excellent licence, clearly spelling out the intent of the copyright, rather than trying to fashion a one-size-fits-all set of rules. it reminds me of cory doctorow’s point that, intuitively, if some community theatre wanted to dramatise one of his works, they should be able to just do so, but if a major hollywood studio wanted to film it they should require a licence, and it is hard to draft a copyright law that does this properly.
It can be. The question is not whether someone could do a thing, it’s whether they should do a thing.
And the answer to that question is: Cockroach Labs itself wants to offer CockroachDB as SaaS, and they see it as absolutely necessary that they have the exclusive right to decide whether anyone else can do that and charge money for the privilege. Fair enough, they hold the copyright on the software (presumably) and can relicense it as they wish.
But what happens to Cockroach Labs’ SaaS offering if every other component of the stack they run on adopts the same license and says “free but only if you’re not a for-profit SaaS”? If they have to pay dozens or, more likely, hundreds of separate license fees for the privilege of using all the other open-source components they depend on?
The answer is Cockroach Labs would not be in the SaaS business for very long after that, because they wouldn’t be able to turn a profit in such a world. The categorical imperative catches up to people. And the real result would be everybody forking from the last genuinely open-source version and routing around the damage that way.
but cockroachdb, as far as i can make out, is not doing this - they’re saying “free, unless you’re a for-profit cockroach-db-as-a-saas”, that is, if what you are selling is a hosted version of cockroachdb itself, rather than some other saas product that happens to use cockroach as a backend.
Right. So assuming that Cockroach Labs offers no services except CockroachDB-as-a-service and a support line, Cockroach Labs would not have to pay for any additional licenses if all dependencies in their software stack switched to CockroachDB’s new license.
I think very few companies would be harmed if this license became prevalent. (I make no statement on the worth of the services of the few companies that would be harmed by such mass relicensing.)
But most of the deps of CockroachDB aren’t created by corporations who need to monetize them directly.
Exactly. I think different kinds of projects end up preferring different kinds of licenses, for good reasons:
Of course not everyone will agree with my philosophy here, but I think it’s good and much more productive than “I hate GPL” / “I hate permissive” / “the anti-SaaS stuff is destroying all FOSS ever”. You don’t have to attach yourself personally to a kind of license, you can adopt a philosophy of “different licenses for different kinds of projects”.
I don’t think that’s true given the value that great infrastructure can provide, esp with good ecosystem. The mainframe companies, VMS Inc, Microsoft, and Apple all pull in billions of dollars selling infrastructure. The cloud companies sell customized and managed versions of open infrastructure. The vendors I reference making separation kernels, safety-critical runtimes, and certifying compilers are providing benefits you can’t get with most open code. Moreover, stuff in that last sentence costs more to make both in developer expertise and time.
I think suppliers should keep experimenting with new licenses for selling infrastructure. These new licenses fit that case better than in the past. If not open, then shared source like Sciter has been doing a long time. I’d still like to see shared source plus paying customers allowed to make unsupported forks and extensions whose licenses can’t be revoked so long as they pay. That gets really close to benefits of open source.
Of course there’s still companies selling specialized, big, serious things. But FOSS infrastructure has largely won outside of these niches. Linux is everywhere, even in smart toilets and remote controlled dildos :D Joyent has open sourced their whole cloud stack. Google has open sourced Bazel, Kubernetes, many frontend frameworks… Etc. etc.
IIRC that’s the Unreal Engine 4 model. It’s… better than hidden-source proprietary, I guess.
I’ve heard of some of these things.. but they’ve been FOSS mostly. NOVA: GPLv2. Muen: GPLv3. seL4: mix of BSD and GPLv2. CompCert: mix of non-commercial and GPLv2.
“But FOSS infrastructure has largely won outside of these niches.”
Free stuff that works well enough is hard to argue with. So, FOSS definitely wins by default in many infrastructure settings.
“but they’ve been FOSS mostly. NOVA: GPLv2. Muen: GPLv3. seL4: mix of BSD and GPLv2. CompCert: mix of non-commercial and GPLv2.”
They’ve (pdf) all been cathedral-style, paid developments by proprietary vendors or academics. A few became commercial products. A few were incidentally open-sourced, with one, Genode, having some community activity; seL4 may have some. Most seL4-based developments I’ve seen are done by paid folks. The data indicates that in security-focused projects the best results come when qualified people are paid to work on them. The community can do value-adds, shake bugs out, help with packaging/docs, translate, etc. The core design and security usually require a core team of specialists, though. That tends to suggest paid models with shared source, or a mix that includes F/OSS, are the best way to incentivize further development.
“and remote controlled dildos :D”
There’s undoubtedly some developer who got laid off from a job shoving Windows CE or Symbian into devices that were once hot, and who dreamed of building bigger, better, and smarter dildos that showed off what their platform had. The humiliation that followed wasn’t a smiling matter, Sir. For some, it may not have been the first time either.
Yes, the discussion was about licensing, not community vs paid development. For this kind of project, I don’t see how non-FOSS shared source licensing would benefit anyone.
Individuals outside business context could use, inspect, and modify the product for anywhere from cheap to free. Commercial users buy a license that’s anything from cheap to enterprise-priced. The commercial use generates revenues that pay the developers. Project keeps getting focused work by talented people. Folks working on it might also be able to maintain work-life balance. If 40-hr workweek, then they have spare time and energy for other projects (eg F/OSS). If mix of shared-source and F/OSS, a percentage of the funds will go to F/OSS.
I think that covers a large number of users with acceptable tradeoffs. Harder to market than something free. The size of the security and privacy markets makes me think someone would buy it.
They aren’t today.
But yesterday, CockroachDB was open-source software.
Yeah people love free stuff and not paying for it.
Well, most of the free stuff I have access to is reasonably priced.
Ok, I meant to say not paying what it is worth (draining the producers).
Yes, people love getting things for free.
Cockroach Labs likes getting things for free, but has decided that they don’t like giving things away for free. This is a choice they have the legal right to make, of course, but that doesn’t necessarily make it the right decision.
From a business perspective, it’s a very bad sign. A company suddenly switching from open source to proprietary/“source available” is usually a company where the vultures are already circling. And mostly it indicates a fundamental problem with the business model; changing the license like this doesn’t fix that problem, and in fact can’t fix it. If demand for CockroachDB is significant enough, other people will fork from the last open-source release and keep it going. If demand for it isn’t significant enough, well, they won’t. And either way, Cockroach Labs probably won’t make back what the VCs invested into it.
From a software-ecosystem perspective, it’s more than a bit hypocritical. Lots of people build and distribute permissive-licensed software, and Cockroach Labs has, if not profited (since they may not be profitable) from it, at least saved significant up-front development cost as a result. If what they wanted was a copyleft-style share-and-share-alike, there were licenses available to let them do that (which, from a business perspective, still would not have saved them). But that’s not really what they wanted (and by “they” I mean the people in a position to impose decisions, which does not mean the engineering team or possibly even the executive team). What they seem to have wanted was to be proprietary from the start, and therefore to have absolute control over who was allowed to compete with them and on what terms. There is no open-source or Free Software license available which achieves that goal; the AGPL comes closest, but still doesn’t quite get there.
And there simply may not have been a business model available for CockroachDB that would satisfy their investors, but Cockroach Labs was founded at a time when it already should have been clear – especially to a founding team of ex-Googlers – where the market was heading with respect to managed offerings for this type of software. They could have tried other options, like putting more work into integrating with cloud providers’ marketplaces, but instead they knowingly signed up to get their lunch eaten, and do in fact appear to have gotten their lunch eaten.
You are hinting that Cockroach Labs are trying to act as freeloaders while ignoring the real elephant in the room: SaaS providers.
I’m pointing out the simple fact that Cockroach Labs wants to have the right to build a business on open-source software, but wants to say that other entities shouldn’t have that same right. That’s literally what this comes down to, and literally what their new license tries to say.
That’s an unfair characterization. The code they use is made by people who like giving stuff away for free. If permissive, they’ve already chosen a license that lets commercial software reuse it without giving back any changes. If copyleft under a GPL-like license, there are already bypasses to sharing, like SaaS, that they’re implicitly allowing by not using a stronger license. They’re also doing this in a market where most users of their libraries freeload. They then release the code under that license knowing all this, for whatever reasons they have in mind.
And then Cockroach Labs, whose goal is a mix of profit and public benefit, uses some of the code they were given for free. They modify the license to suit their goals. Each party contributing code should be fine with the result because each one is doing exactly what you’d expect with their licenses and incentives. If anything, CockroachDB is going out of their way to be more altruistic than other for-profit parties. They could be locking stuff up more.
They approve of the “take open-source software and build a business on it without financially supporting all the authors in a sustainable way” approach when it’s them doing it with other people’s software. They don’t approve when it’s Amazon doing it with CockroachDB. You can try to spin it, but that’s really what it comes down to.
And they want control over who’s allowed to compete with them and who’s allowed to use their software for what purposes. That’s fundamentally incompatible with their software being open source, and they’ve finally realized that, but it’s a bit late to be suddenly trying to change to proprietary.
I agree it won’t be open source software when they relicense it. I disagree that there’s any spin. I tell people who want to force contributions or money back to put it in their license with a clause blocking relicensing to non-OSS/FOSS. Yet, the OSS people still keep using licenses or contributing to software with such licenses that facilitate exactly what CockroachDB-like companies are doing.
I don’t see how it’s evil or hypocritical for a for-profit company acting in its self-interest to use licensed components whose authors chose those licenses knowing they facilitate that. It wasn’t the developers’ only option. There was a ton of freeloading and hoarding of permissively-licensed components before they made the choice. Developers wanting contributions from selfish parties, esp companies, should use licenses that force it, like the AGPL or Parity. The kinds of companies they gripe about mostly avoid that stuff. This problem of building on permissive licensing and then relicensing has two causes, not one.
Note: There’s also people that don’t care if companies do that since they’re just trying to improve software they and other people use. Just figured I should mention that in case they’re reading.
It’s not “evil”. But it is at least a bit hypocritical to decide that you’re OK doing something yourself, but not with other people doing it too.
Given their intended business model, CockroachDB probably should have been proprietary from the start. Would’ve avoided this specific headache (but probably still wouldn’t have avoided the problem with the business model they chose).
“But it is at least a bit hypocritical to decide that you’re OK doing something yourself, but not with other people doing it too.” “CockroachDB probably should have been proprietary from the start”
“three years after each release, the license converts to the standard Apache 2.0 license”
Amazon isn’t giving all their stuff away after three years under a permissive, open-source license. What we’re really discussing is a company that will delay open-sourcing its code by three years, not one that simply ships proprietary software. Every year, they’ll produce more open-source code. It will be three years behind the proprietary, shared-source version everyone can use except for SaaS companies cloning and selling their software. You’re talking like they’re not giving anything back or doing any OSS. They are. It’s just in a way that captures some market value out of it.
In contrast, the people making OSS dependencies usually aren’t doing anything to capture business value out of the code. If anything, they’re not even trying to. They’re directly or indirectly encouraging commercial freeloading with a license that enables it instead of using one that forbids it. So, CockroachDB doesn’t owe them anything or have any incentive to pay. Whereas, CockroachDB’s goal is to make profit on their own work. The goal differences are why there’s no hypocrisy here. It would be different if the component developers were copylefting or charging for CockroachDB’s dependencies with the company not returning code or pirating the components.
Have you heard anyone at Cockroach Labs say this? Wouldn’t they be able to offer their service based on 3 year old versions of every piece of OSS they use? It seems to me this license would work fine transitively, so there’s no hypocrisy involved.
Sounds good to me. They have had millions of dollars of funding, they can easily pay some money to people who deserve it.
Or we’ll get something like ASCAP, but for software instead of music.
As a long time ASCAP member, I hope we could do better.
I believe I read somewhere that they considered the user having the ability to freely modify the schema as being “as a service”
Edit: found it
The user of a “CockroachDB as a Service” company, that is (not just a user of CockroachDB in general)
Thanks @trousers @johnaj for the clarification. I guess, for me, this ‘muddied the waters’, so to speak.
Say, hypothetically, I have a SaaS that allows my customers to upload logs from IoT devices, and schema (in my DSL) explaining the data, and some SQL-like (but can also be my DSL) queries about their data.
My service is to provide the results of the queries back to them via dashboards/PDFs etc. The hypothetical SaaS charges for that service (and hopes, in some distant future, to make a net profit).
Underneath, I want to use CockroachDB.
When a customer provides their data explanation in the DSL, I actually translate it into a CockroachDB schema, and create materialized and non-materialized views (I do not know if the DB supports this; let’s assume it does). I do that so that the customer’s queries can be translated to database statements more easily (and run efficiently).
So I have a SaaS service, and allow customers (although indirectly) to create schema specific to their data in my database.
Will I need a license?
From what I am reading right now, I will.
This is not good or bad – but I hope, then, that Postgres would never adopt BLISS.
Maybe I am wrong, so I hope to hear what others think.
No. I think anything that is indirect (they are not using the wire protocol or directly issuing queries) is not going to require a license.
That said, I can see how your example demonstrates a possible problem – if Amazon created, say, a GraphQL layer in front of it that just translated to and from CockroachDB, would that give them safety license-wise? I think it would.
Right, there is ambiguity about the ‘type or class’ of layers that, when added, will not require a license vs. layers that will require a license.
If I correctly understand the spirit and the intent of their license, I actually think CockroachDB should protect themselves, and explicitly specify the following layers:
a) security + access control layers
b) performance + scalability layers
c) General (not domain specific) query meta language layers
d) Deployment layers (eg ansible roles on top)
e) Hardware layer underneath (eg optimized FPGA/GPUs)
If a SaaS business added, in essence, only the above layers on top of their DB, and then sold it as SaaS together with CockroachDB – they would need the BLISS license.
Also, at the end of the day, their license may still end up being free for some businesses that fall under BLISS – but I think the CockroachDB team and their investors want to be in control of that decision…
Right. Good clarification.
I think Pony is a bit more cumbersome to use than many other languages, at least for a simple “Hello, World!” example in SDL2, but it feels surprisingly solid. https://github.com/xyproto/sdl2-examples/blob/master/pony/main.pony
I will say, this is a Pony program that uses SDL by directly linking against external C functions! Most SDL hello-world examples you’ll see in other languages use a library wrapping the external calls. I think it speaks volumes that the Pony source is nonetheless both readable and short, especially considering that Pony is a managed language with support for actors. (In comparison, the C FFI in both Go and Erlang tends to be much harder.)
It uses SDL2 directly only because no SDL2 library was available for Pony at the time (I’m not sure if there is one available now).
I just did some exercises in Pony and Rust and I definitely found Pony the more elegant and easier language, but with much worse library support.
We definitely have a considerably smaller community than Rust at this point. In part, I think that is:
1. Rust has corporate backing and people paid to work on it
2. It’s post-1.0 and most people won’t consider using pre-1.0 stuff
More people contributing to the Pony ecosystem (including libraries) is something we could really use and would happily support as best we can. We’ve had a lot of excited people start to pitch in. Most haven’t contributed to open source projects before and I think for many, the shine rather quickly rubs off. I don’t blame them, maintaining open source software is… well, it’s an “interesting hobby”.
Absolutely agree. Even contributing to a few projects, I can see that I wouldn’t want to be a maintainer without being paid or “it” being my big idea.
I don’t do much system level programming, so I don’t need either, really, so I’m very unlikely to step up.
A bridge to rust might help though?
Pony can directly call into C (and therefore into Rust) and generate C headers, which Rust can consume to generate interfaces. The biggest problem for “just” using both from both sides is language semantics outside of the function call interface that the C ABI provides. Also, Rust’s ABI is currently private.
Both Rust and Pony provide safety guarantees and features way past “safe systems programming” and Rust is definitely used as a “wicked fast Python” in some circles. It’s an interesting space to work in, currently :).
I really like the syntax, and the underlying ideas. I recently speed read through the tutorial, and the most daunting aspect of it (for me) was the reference capabilities part. I hope I can find a side project to play with it some more.
Plus the language is named pony, which makes it instantly great. ;)
How big is your production environment? Is it realistic and affordable to run a regularly updated “exact copy”?
The big question: why not port something else like pony or cloud haskell? Or why not add static types to something like erlang itself?
I’m not sure what you mean about Pony or Cloud Haskell but I have some answers to why not Erlang.
Statically typing Erlang is a different and considerably more difficult problem than making a compatible, soundly typed language. Realistically, for an Erlang typer to get adoption it needs to be flexible enough to be applied incrementally to existing Erlang codebases, which means gradual typing. Gradual typing offers a very different set of guarantees to those that Gleam offers. By default it’s unsafe, which isn’t what I wanted.
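Python’s optional type hints make for a compact illustration of that “unsafe by default” property (a sketch in Python by analogy, not Erlang or Gleam code):

```python
# Gradual typing: annotations document intent but are not enforced at
# runtime, so an ill-typed call still executes.

def double(n: int) -> int:
    # The annotation claims n is an int...
    return n * 2

# ...but nothing stops a caller from passing a str. A static checker
# like mypy would flag this call, yet the program runs without error.
print(double("ha"))  # prints "haha" (str * 2), not a type error
```

A sound type system rejects such a program outright; a gradual one only warns if you remember to run the checker.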
There are also some more human issues, such as my not being part of Ericsson, so I would have little to no ability to make use of or adapt the existing compiler; I would have to attempt to be compatible with their compiler. If we were successful in that, the add-on nature means it becomes more of a battle to get it adopted in Erlang projects, something that I would say the officially supported Dialyzer has struggled to do.
Overall I don’t believe it would be an achievable goal for me to add static types to Erlang, but making a new language is very achievable. It also gives us an opportunity to do things differently from Erlang: I love Erlang, but it certainly has some bits that I like less.
I’m not sure what the point of Pony on BEAM would be. Pony’s reference capabilities statically prove that “unsafe things” are safe to do. BEAM wouldn’t allow you to take advantage of that. Simplest example: message passing would result in data copying. This doesn’t happen in the Pony runtime. You’d be better off using one of the languages that run on BEAM, including ones with static typing.
Thanks for Pony and thanks for the reply!
As you might imagine, I’m very interested in how Pony types actors. I skimmed the documentation but many aspects, such as supervision trees, were unclear to me (likely my fault). Are there any materials on structuring Pony applications you could recommend? Thank you :)
There are no supervision trees at this time in Pony. How you would go about doing that is an interesting open question.
Because Pony is statically typed and you can’t have null references, the way you send a message to an actor is by holding a “tag” reference to that actor and calling behaviors on it. This has several advantages but, if you were to try to do Erlang-style “let it crash”, you run into difficulties.
What does it mean to “let it crash”? You can’t invalidate the references that actors hold to this actor. You could perhaps “reset” the actors in some fashion but there’s nothing built into the Pony distribution to do that, nor are the mechanics something anyone has solid ideas about.
There aren’t a lot of materials on structuring Pony applications I can recommend at this time. I’d suggest stopping by our Zulip and starting up a conversation in the beginner help stream: https://ponylang.zulipchat.com/#narrow/stream/189985-beginner-help
Thank you
AFAIK there’s still the issue with read-only access for public builds which I’d regard as a blocker for public project usage https://github.com/buildkite/feedback/issues/137
Found https://buildkite.com/changelog/46-public-build-pages-for-open-source.
I’m really glad you pointed this out as I’ve been planning on transitioning some of the Pony CI over to buildkite and it would have sucked when I found out about this.
Apparently they’ve now fixed this, as per @j605’s link below, and the ticket has now been closed out since I pointed this out upstream.
Is there a reason to consider this instead of something like 1Password or Bitwarden? I read the FAQ and it seems like it is the same except tied to Firefox.
Dunno about their offerings, but what I personally like about Lockbox is that it’s got a super friendly UX (it caters for simplicity of use rather than to us nerds :-)). I already use Firefox Sync on all my devices, but the Android autofill API allows using it for apps too.
Lockbox is also free of charge, free software and encrypts data on the client, not in the cloud.
If I follow correctly, you are saying it’s encrypted at the client and then synced to the cloud. Correct?
Yeah
Those are proprietary, and I would assume the Lockbox thing isn’t? Edit: 1Password is proprietary, Bitwarden isn’t. Oops.
Bitwarden is open source:
https://github.com/bitwarden
Nope.
Well, ok, one of them is then.
Have to say, my experience of Bitwarden has been nothing but positive. Much prefer to alternatives like lastpass
Us was really awesome. Y’all should go see it.
And I got that release done. So I guess all that is left is exercising and relaxing.
Nice.
This is an exciting project
but I am not touching it until GCC is supported
https://github.com/ponylang/ponyc/issues/2079
GCC is supported everywhere but Windows. We’d love it if you wanted to help with getting GCC working on Windows with Pony.
Honestly I can and would like to help but I am pretty busy:
Http://GitHub.com/cup
Maybe in a month or so if it’s not done by then
That would be awesome.
Out of curiosity, why?
You can get GCC with about 26 MB worth of downloads. Visual Studio is about two orders of magnitude above that. In addition, I don’t believe the Visual Studio compiler is open source.
So until that changes I will use and stick with projects that use GCC/Clang.
MinGW allows you to cross-compile Windows binaries without running Windows.
As @WilhelmVonWeiner mentioned, I would invest in a copy of The Design and Implementation of the 4.4BSD Operating System or The Design and Implementation of the FreeBSD Operating System.
And I’d suggest starting by writing your own simple operating system kernel. Personally, I moved from there to studying Minix (this was many years ago). Minix was (and I hear still is) great for learning from. It’s different from Linux in that it is a microkernel architecture; that said, you will learn a lot from it because it’s easy to read through and understand. Between writing your own and Minix, you’ll be off to a good start.
I still find the source for the various BSDs easier to follow than Linux and would suggest making that your next big move; graduate to a BSD if you will and from there, you could leap to Linux.
One other thing to consider: picking a less-used kernel to start hacking on when you feel comfortable might be a good idea if you can find folks in that community to mentor you. In the end, the code base matters less than having people who are grateful for your assistance and want to help you learn. In my experience, smaller communities are more likely to be ones you can find mentors in. That said, your mileage may vary greatly.
Thanks! Any idea why the source for BSDs is easier to follow?
Not a BSD, but Minix 2 was written with readability as nearly the only goal
I could take guesses based on number of people who are committers and the development process as to why that is the case, but in the end, it would be speculation. I know I’m not alone in this feeling, but I don’t know if I’m in the majority or minority.
Any pros and cons of starting with the 4.4 BSD Operating System vs FreeBSD Operating System book?
I haven’t read the 2nd edition of the book so I can’t comment on that. Sorry.
It’s not about Linux per se, but it does relate to how operating systems work and a similar kernel: The Design and Implementation of the 4.4BSD Operating System. I bought it on the recommendation of John Carmack and the depth it goes into is great. Every chapter also has little quizzes without answers, so you can confirm to yourself you know how the described system components should work.
Great book. Multiple pluses on this one.
Thanks! I’ll give that book a look.
Thanks!
The irony is that he’s now trying to build better tools that use embedded DSLs instead of YAML files, but the market is so saturated with YAML that I don’t think the new tools he’s working on have a chance of gaining traction, and that seems to be the major part of the angst in that thread.
One of the analogies I like about the software ecosystem is yeast drowning in the byproducts of their own metabolic processes after converting sugar into alcohol. Computation is a magical substrate but we keep squandering the magic. The irony is that Michael initially squandered the magic, and in the new and less magical regime his new tools don’t have a home. He contributed to the code-less mess he’s decrying because Ansible is one of the buggiest and slowest infrastructure management tools I’ve ever used.
I suspect, like all hype cycles, people will figure it out eventually, because Ant used to be a thing and now it is mostly accepted that XML for a build system is a bad idea. Maybe eventually people will figure out that infrastructure-as-YAML is not sustainable.
What alternative would you propose to DSLs or YAML?
There are plenty of alternatives. Pulumi is my current favorite.
Thanks for bringing Pulumi to my radar, I hadn’t heard of it earlier. It seems quite close to what I’m currently trying to master, Terraform. So I ended up here: https://pulumi.io/reference/vs/terraform.html – where they say
Which to me seemed rather dishonest. Terraform’s way seems much more flexible and doesn’t tie me to Hashicorp if I don’t want that. Pulumi seems like a modern SaaS money vacuum: https://www.pulumi.com/pricing/
The positive side, of course, is that doing many programmatic-like things in Terraform HCL is quite painful, like all non-Turing-complete programming tends to be when you stray from the path the language designers built for you … Pulumi obviously handles that much better.
I work at Pulumi. To be 100% clear, you can absolutely manage a state file locally in the same way you can with TF.
The service does have free tier though, and if you can use it, I think you should, as it is vastly more convenient.
You’re welcome to use a local state file the same way as in Terraform.
+100000
I am powerfully tempted to repartition one of my drives and give this a shot.
Do it! :)
I think you should
Is anything in software development predictable enough to make this kind of analysis useful?
No, this is basically a management wish-fulfillment fantasy.
Likewise, I have heard this asserted by every manager I’ve ever worked with and for. No evidence has ever been presented, nor have estimates actually improved over time. (The usual anti-pattern is: estimates get worse over time because every time an estimate turns out to be low, a fix is proposed–“what if we do all-team estimates? what if we update estimates more frequently?”–which inevitably means spending more time on estimates, which means spending less time on development, which means we get slower and slower.)
I personally used to be quite bad at estimating. I’ve worked at it, I’ve gotten much better about estimating. There are things you can do to get much better at it. None of the things you’ve mentioned are ones I think would help. I plan on writing a post about the things I’ve learned and taught others that have helped make estimates more accurate.
That would make great reading.
Are there any existing accounts of effective software schedule estimation you’d recommend?
Two things I would recommend (and will be primary topics of said blog post).
Estimates slipping is usually about not accurately accounting for risk. Waltzing with Bears is a great book on dealing with risk management. The ideas in it might be overkill for many folks but the process of thinking about how you should account for risk and establishing your own practices is invaluable. The book is great even if you only use it as “that seems overblown, what if I…”.
The second is to record your estimates and why you made them. What did you know at the time that you made your estimate? Then, when your estimate is wrong, examine why. What didn’t you account for? When I first started doing this, I realized that most of my estimates were wrong because I didn’t bother to explore the problem enough and that I was being tripped up by not accounting for known knowns. Eventually I got better at that and started getting tripped up by known unknowns (that’s risk). I’ve since adopted some techniques for fleshing out risks when I am estimating and then front-loading work on those risks. If you think something might take a week or it might take a month, work on that first. Dig in to understand the problem so you can better estimate it. Maybe the problem is way harder than you imagine and you shouldn’t be doing the project at all. This isn’t a new concept but it’s one that is rarely used. In agile methodologies, it’s usually called a “spike”.
I’ve worked on projects that spanned months, and we spent a couple of weeks on estimation and planning. A big part of that time was spent digging in, understanding the problem better, discussing it, and figuring out what we needed to explore further to really understand the project so we could course-correct as we went along.
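The record-and-review habit can be as lightweight as a list of entries like this (a hypothetical sketch; the task names and numbers are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    task: str
    estimated_days: float
    rationale: str            # what you knew when you estimated
    actual_days: float = 0.0  # filled in once the task is done

    def error_factor(self) -> float:
        # > 1.0 means the task ran over the estimate
        return self.actual_days / self.estimated_days

log = [
    Estimate("parser rewrite", 5, "similar to last quarter's lexer work",
             actual_days=12),
    Estimate("add CLI flag", 1, "well-understood code path",
             actual_days=1),
]

# Review the misses: big overruns point at assumptions worth revisiting.
for e in log:
    if e.error_factor() > 1.5:
        print(f"{e.task}: {e.error_factor():.1f}x over; revisit: {e.rationale}")
```

The point isn’t the tooling, it’s forcing yourself to write the rationale down before the work starts, so the post-mortem compares against what you actually believed.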
Ooh, a DeMarco book! Will definitely check it out. Thanks!
Please do write this, I need to improve in this area.
The wish-fulfilment does not exist in a vacuum.
Your customers might not be happy with your team constantly running late. Your pre-revenue startup might have a hard time raising investment. Whatever. There are external reasons for why a professional developer must be reliable in his or her estimates to actually get things out the door.
I’ve been changing my opinion on this back and forth. Especially in a pïss-poor startup, where the biz guys wanted us to skip unit testing to achieve results faster, refusing to estimate was a fuck you. The code base got convoluted, but dealing with how they represented things was also frustrating.
I feel that in those cases the problem runs deeper in how geeks are supposed to be managed. Hell, it could be that estimation starts eating up time because the managers drove the geeks into protesting, which is - of course - as unprofessional as delivering late. Still you need to sort out your org for smooth ops before taking care of estimates.
Yet this isn’t car repair for old vehicles, where you find more and more problems as you go along, making estimates tough without thorough diagnostics, but the customer is happy with the substitute car you gave out in the meantime.
The fact that customers want something, or that it is necessary to the business’s success, do not cause it to become possible. I’m not disputing the desirability of accurate estimates. I’m disputing the idea that they are possible. I have not seen any team or technique generate such estimates reliably, over time, in various circumstances. (Of course, like any gambler with a “system,” sometimes people making estimates get lucky and they turn out to be right by coincidence.) I have heard many managers claim to have a system for reliable estimates which worked in some previous job; none was able to replicate that success on the teams I observed directly.
(It’s not just software, either. Many people point to the building trades as an example of successful estimation and scheduling. In my experience maintaining and restoring an old house and the experiences of friends and acquaintances who’ve undertaken more ambitious restorations, this is more wishful thinking. It’s common for estimates by restoration contractors on larger jobs to be off by months or years, and vast amounts of money. If so mature an industry can’t manage reliable scheduling, what hope is there for us?)
I’d argue that is exactly what much software development is like (except there is no substitute for the customer).
Maybe I’d like to be more optimistic about learning to estimate better ;) But for sure @SeanTAllen touched on a lot of pertinent points. Is it Alice or Bob who gets the task? How well is the problem space, the code, known? And so on.
It’s hard as balls, and you’re not wrong with your gambler analogy, but not all systems for getting things right occasionally are equally unlikely to succeed. Renovators also learn what to look out for and how long those issues tend to take, as well as the interactions. Probabilities are usually ok by customers if that’s all you got.
In my car analogy, the point kinda was that we’re screwed because we can’t give out substitutes. We can deliver temporary hacks, although nothing is as permanent as a temporary hack.
A lot of activities in software development try to improve predictability. For example: following style guidelines, continuous integration, unit testing, etc. All of these have a cost and slow developers down. The upside of course is to reduce bugs which will slow you down much more later. Or maybe not. The risk is generally too high, so we generally prefer the predictability of a slow incremental process.
I have a feeling that Deming thought about that when he talked about “variation”, but I need to read more from the father of Lean and grandfather of Agile to understand it. Currently, I believe that I don’t assign quite the correct meaning to the words I read from him.
“When pursuing a vertical scaling strategy, you will eventually run up against limits. You won’t be able to add more memory, add more disk, add more “something.” When that day comes, you’ll need to find a way to scale your application horizontally to address the problem.”
I should note the limit last I checked was SGI UVs for data-intensive apps (eg SAP Hana), having 64 sockets with multicore Xeons, 64TB RAM, and 500ns max latency for all-to-all communication. I swore one had 256 sockets but maybe I’m misremembering. Most NUMAs also have high-availability features (eg “RAS”). So, if it’s one application per server (i.e. just the DB), many businesses might never run into a limit on these things. The main limit I saw studying NUMA machines was price: scaling up cost a fortune compared to scaling out. One can get stuff in the low-to-mid five digits now that previously cost six to seven. Future-proofing scale-up by starting with SGI-style servers has to be more expensive, though, than starting with scale-out, even if scale-out starts on a beefy machine.
You really should modify the article to bring pricing up. The high price of NUMA machines was literally the reason for inventing Beowulf clusters, which pushed a lot of the “spread it out on many machines” philosophy toward the mainstream. The early companies selling them always showed the price of, eg, a 64-256 core machine by Sun/SGI/Cray vs a cluster of 2-4 core boxes. The first was the price of a mini-mansion (or a castle, if clustering NUMAs), with the second ranging from a new car to a middle-class house. HA clustering goes back further with VMS, NonStop, and mainframe stuff. I’m not sure if cost pushed horizontal scaling for fault-tolerance to get away from them or if folks were just building on popular ecosystems. Probably a mix, but I got no data.
“The number of replicas, also known as “the replication factor,” allows us to survive the loss of some members of the system (usually referred to as a “cluster”). “
I’ll add that each could experience the same failure, esp if we’re talking attacks. Happened to me in a triple modular redundancy setup with a single faulty component. On top of replication, I push hardware/software diversity as much as one’s resources allow. CPUs built on different tools/nodes. Different mobos and UPSes. Maybe optical connections if worried about electrical stuff. Different OSes. Different libraries if they perform identical functions. Different compilers. And so on. The thing that’s the same on each node is the one app you’re wanting to work. Even it might be several versions written by different people, with the cluster having a mix of them. The one thing that has to be shared is the protocol for starting it all up, syncing the state, and recovering from problems. Critical layers like that should get the strongest verification the team can afford, with SQLite and FoundationDB being the exemplars in that area.
Then, it’s really replicated in a fault-isolating way. It’s also got a lot of extra failure modes one has to test for. Good news is several companies and/or volunteers can chip in each working on one of the 3+ hardware/software systems. Split the cost up. :)
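To make the correlated-failure point concrete, here is a minimal sketch (mine, not from the post) of the voting step in a triple-modular-redundancy setup; voting only masks faults that are not shared by a majority of replicas:

```python
from collections import Counter

def majority_vote(replies):
    """Return the value reported by a strict majority of replicas, else None."""
    value, count = Counter(replies).most_common(1)[0]
    return value if count > len(replies) / 2 else None

# One diverse replica disagrees: the majority still wins.
print(majority_vote([42, 42, 41]))  # 42

# Correlated failure: identical wrong answers outvote the correct one,
# which is the single-faulty-component-in-all-replicas scenario above.
print(majority_vote([41, 41, 42]))  # 41
```

Diversity in hardware and software is what makes the independence assumption behind the vote plausible in the first place.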
Those are planned subjects for future posts. I wanted to keep this one fairly simplistic for folks who aren’t experts in the area.
That makes sense. Thanks for clarifying.
I rarely get hyped by a talk, but this one definitely spoke to me. This is awesome work!
I spent the day beforehand trying to get Edwin to change his talk to be about the rules of Cricket. Probably best that he ignored me.
As an american that has looked glancingly at cricket rules, I am happy this was ignored. >.<