The Pony project is primarily using:
We still have a few older TravisCI, Appveyor, and CircleCI tasks around, but those are all being deprecated.
GitHub Actions are very nice for us; it's much easier to automate portions of our workflow than it was with "external" CI services.
CirrusCI is awesome because… we can test Linux, Windows, macOS, and FreeBSD. But most importantly, we have CI jobs that build LLVM from scratch, and CirrusCI allows us to get 8-CPU machines for the job. The only CI services we tried that have been able to handle the jobs that build LLVM are CircleCI and CirrusCI.
I switched a project from a self-hosted Buildbot to CirrusCI and I’ve also found it to be great. It’s so useful that they can spin up builders on demand and do so on your own GCP account.
Aside from a number of settings that I've changed, I use the following extensions.
Some settings changes:
Theme:
Language:
File type support:
Other:
I started as a programmer in the early 90s. During that time, the only Windows based computer I ever owned was a dedicated one that ran Cubase for recording. Despite that, about 3 weeks ago I switched off of MacOS after 12 years and got a Surface Book 2 that I’ve been using with WSL2. There are some weird and hinky things with Windows 10 that bother me but all in all, WSL2 is a better development experience than MacOS was.
When I got the Surface Book, it was because I also needed some software that made Linux a difficult choice. I told myself that if I didn't need that software, I would be using Linux, provided I could find a good Linux laptop with a trackpad I can stand (I'm rather picky on the trackpad front). Now though, I'm not so sure.
The workflow of being able to spin up different Linux environments really quickly and throw them away has been a wonderful development experience. Windows itself isn't, in my mind, really worse than MacOS; it's just bad in different ways.
At this point, Windows with WSL2 might end up being my primary development environment for a long time. Which strikes me as the weirdest thing I could say, because Windows was basically a giant unknown foreign entity for me. I've been a Unix*n developer for over 25 years and the idea of using Windows still seems foreign to me.
¯\_(ツ)_/¯
I think part of it might be how Apple seems to have thrown away their investment in a platform for creators/hackers in favor of going all in on consumer-oriented platforms. They should've done both. They had the money to make their Mac OS the platform that all the best creators would want. They literally could've bought most of the major companies creators used, ensured their products were most optimized for Mac OS, made the best hardware that was simultaneously optimized for those companies' products, and kept the creators coming. They'd have the lead on consumer stuff, creator stuff, and still plenty of money to throw at shareholders.
Instead, the company whose co-founder was great at making billions off of consumers totally wasted their opportunities on the creators that made it what it was. They might still turn it around if they can somehow connect the dots.
Perhaps.
At this point I've disliked all the MacBook Pro hardware since 2015, and starting with Sierra, I found MacOS to be a constant series of crashes. So, they have a lot of ground to make up in my mind. Not that they seem to care.
I don’t even like Apple and I still cannot upvote this enough. The iPod was the beginning of the end of the Apple that was worth loving because it showed that there was far, far more money in making classy, expensive consumer goods than in making high quality tech tools.
Sorry, but the iPod was fantastic. It did its job really well, and it felt well-made and of high quality.
The reason the newer MacBooks suck so badly is that they are not fit for purpose for many of our peers. A laptop that ceases to function when a speck of dust wriggles its way into the keyboard is not fit for purpose.
A well-designed product needs to do very well the thing that it was designed to do. The iPod did that. The newer MacBooks do not.
They literally could’ve bought most of the major companies creators used
Oh god please no. Why and how would this have been a good thing!?
It was the creators' platform for a while. High-end software for pictures, audio, and video was targeted at it. Building on the momentum they had with that audience would've kept money coming in and given people fewer reasons to use Windows. Instead, they were pissing off their own customers while the Windows PC kept getting better for those same customers.
It doesn’t look like it rebounded much from Microsoft doing the Metro disaster or turning Windows into a surveillance platform, either. Some folks would like an alternative to Windows with as much usability, high-quality apps, and hardware drivers designed for it.
I thought you must’ve been playing devil’s advocate. You’re saying you want the purveyor of an ecosystem to buy up all the players in its ecosystem? I don’t think that’s ever been Apple’s forté, and if they messed up the platform they would’ve definitely messed up a play like this. I think there are other, significantly more prudent ways to invest in your ecosystem.
I'm basically saying they should've invested in their creator-oriented platform in all the ways big companies do. That includes acquiring good brands in various segments, followed up with ensuring they work great on Mac and integrate well with other apps.
They can otherwise let them do their thing. Apple isn’t good at that. It would’ve helped them, though.
Apple might once have dominated the creators market, but maybe they know something the rest of us don't. The reason they're building $6,000 computers and consumer-oriented laptops might be that that's where the money is right now. https://www.youtube.com/watch?v=KRBIH0CA7ZU
For reference (since I was interested and looked it up), wikipedia says the SurfaceBook 2 has:
- two USB 3.0 Generation 1 ports
- a USB-C port
- 3.5mm headphone jack
- full-size SD card slot on the left
- two SurfaceConnect ports (one in the tablet, one in the base)
ref: https://en.wikipedia.org/wiki/Surface_Book_2
https://shru.gg/r might be of service
What’s the new workflow to use regex, glob, crypto packages? Do these “just work” if installed from source with the Pony package manager?
Not quite. Not yet anyway.
If you have the proper library installed, be it openssl or pcre2, then bringing in regex, crypto, or net-ssl will work.
Glob has a transitive dependency on regex, so you need to tell the package manager to bring in both.
Carl Quinn has been working on a new package manager that handles transitive dependencies. With it, the situation is better.
But neither of the package managers that exist will help with the C library issue.
Our plan is to deprecate the existing package manager, stable, and replace it with the much more full-featured one, corral, that Carl has been working on.
We are still figuring out the system library issue but hope to address that using corral as well.
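For what it's worth, here's roughly what the usage side looks like once pcre2 is installed on the system and the regex package has been fetched by the package manager. This is a hedged sketch; the exact matching API (the partial constructor, the == check) is from memory and may differ between releases:

// Hedged sketch: assumes pcre2 is installed and ponylang/regex has been
// fetched by the package manager. API details are from memory and may
// not match the current release exactly.
use "regex"

actor Main
  new create(env: Env) =>
    try
      // Constructing a Regex is partial: it errors on an invalid pattern.
      let r = Regex("\\d+")?
      if r == "12345" then
        env.out.print("all digits")
      end
    end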
Interested in hearing other views. But I think what they are doing is reasonable.
Can this be extrapolated into a ‘BLISS’ principle: ‘Buy License if SaaS’ (just came up with abbreviation :-) )
“.. The one and only thing that you cannot do is offer a commercial version of CockroachDB as a service without buying a license. ..”
They should probably provide some examples of what they consider a CockroachDB service, vs a service that’s using CockroachDB underneath.
agreed. copying my comment over from hn:
this seems like an excellent licence, clearly spelling out the intent of the copyright, rather than trying to fashion a one-size-fits-all set of rules. it reminds me of cory doctorow’s point that, intuitively, if some community theatre wanted to dramatise one of his works, they should be able to just do so, but if a major hollywood studio wanted to film it they should require a licence, and it is hard to draft a copyright law that does this properly.
Can this be extrapolated into a ‘BLISS’ principle: ‘Buy License if SaaS’
It can be. The question is not whether someone could do a thing, it’s whether they should do a thing.
And the answer to that question is: Cockroach Labs itself wants to offer CockroachDB as SaaS, and they see it as absolutely necessary that they have the exclusive right to decide whether anyone else can do that and charge money for the privilege. Fair enough, they hold the copyright on the software (presumably) and can relicense it as they wish.
But what happens to Cockroach Labs’ SaaS offering if every other component of the stack they run on adopts the same license and says “free but only if you’re not a for-profit SaaS”? If they have to pay dozens or, more likely, hundreds of separate license fees for the privilege of using all the other open-source components they depend on?
The answer is Cockroach Labs would not be in the SaaS business for very long after that, because they wouldn’t be able to turn a profit in such a world. The categorical imperative catches up to people. And the real result would be everybody forking from the last genuinely open-source version and routing around the damage that way.
But what happens to Cockroach Labs’ SaaS offering if every other component of the stack they run on adopts the same license and says “free but only if you’re not a for-profit SaaS”?
but cockroachdb, as far as i can make out, is not doing this - they’re saying “free, unless you’re a for-profit cockroach-db-as-a-saas”, that is, if what you are selling is a hosted version of cockroachdb itself, rather than some other saas product that happens to use cockroach as a backend.
Right. So assuming that Cockroach Labs offers no services except CockroachDB-as-a-service and a support line, Cockroach Labs would not have to pay for any additional licenses if all dependencies in their software stack switched to CockroachDB’s new license.
I think very few companies would be harmed if this license became prevalent. (I make no statement on the worth of the services of the few companies that would be harmed by such mass relicensing.)
But most of the deps of CockroachDB aren’t created by corporations who need to monetize them directly.
Exactly. I think different kinds of projects end up preferring different kinds of licenses, for good reasons:
Of course not everyone will agree with my philosophy here, but I think it’s good and much more productive than “I hate GPL” / “I hate permissive” / “the anti-SaaS stuff is destroying all FOSS ever”. You don’t have to attach yourself personally to a kind of license, you can adopt a philosophy of “different licenses for different kinds of projects”.
core infrastructure — libraries, runtimes, kernels, compilers — permissive and public domain-ish — because “stuff you were going to write anyway”,
I don’t think that’s true given the value that great infrastructure can provide, esp with good ecosystem. The mainframe companies, VMS Inc, Microsoft, and Apple all pull in billions of dollars selling infrastructure. The cloud companies sell customized and managed versions of open infrastructure. The vendors I reference making separation kernels, safety-critical runtimes, and certifying compilers are providing benefits you can’t get with most open code. Moreover, stuff in that last sentence costs more to make both in developer expertise and time.
I think suppliers should keep experimenting with new licenses for selling infrastructure. These new licenses fit that case better than in the past. If not open, then shared source, like Sciter has been doing for a long time. I'd still like to see shared source plus paying customers allowed to make unsupported forks and extensions whose licenses can't be revoked so long as they pay. That gets really close to the benefits of open source.
Of course there’s still companies selling specialized, big, serious things. But FOSS infrastructure has largely won outside of these niches. Linux is everywhere, even in smart toilets and remote controlled dildos :D Joyent has open sourced their whole cloud stack. Google has open sourced Bazel, Kubernetes, many frontend frameworks… Etc. etc.
shared source plus paying customers allowed to make unsupported forks and extensions whose licenses can’t be revoked so long as they pay
IIRC that’s the Unreal Engine 4 model. It’s.. better than hidden source proprietary I guess.
separation kernels, safety-critical runtimes, and certifying compilers are providing benefits you can’t get with most open code
I’ve heard of some of these things.. but they’ve been FOSS mostly. NOVA: GPLv2. Muen: GPLv3. seL4: mix of BSD and GPLv2. CompCert: mix of non-commercial and GPLv2.
“ But FOSS infrastructure has largely won outside of these niches. “
Free stuff that works well enough is hard to argue with. So, FOSS definitely wins by default in many infrastructure settings.
“but they’ve been FOSS mostly. NOVA: GPLv2. Muen: GPLv3. seL4: mix of BSD and GPLv2. CompCert: mix of non-commercial and GPLv2.”
They've (pdf) all been cathedral-style, paid developments by proprietary vendors or academics. A few became commercial products. A few were incidentally open-sourced, with one, Genode, having some community activity. seL4 may have some. Most seL4-based developments I've seen are done by paid folks. The data indicates that in security-focused projects the best results come when qualified people are paid to work on them. The community can do value-adds, shake bugs out, help with packaging/docs, translate, etc. The core design and security usually requires a core team of specialists, though. That tends to suggest paid models with shared source, or a mix that includes F/OSS, are the best model to incentivize further development.
“and remote controlled dildos :D “
There’s undoubtedly some developer that got laid off from their job shoving Windows CE or Symbian into devices that were once hot who dreamed of building bigger, better, and smarter dildos that showed off what their platforms had. The humiliation that followed wasn’t a smiling matter, Sir. For some, it may have not been the first time either.
cathedral-style, paid developments by proprietary vendors or academics
Yes, the discussion was about licensing, not community vs paid development. For this kind of project, I don’t see how non-FOSS shared source licensing would benefit anyone.
Individuals outside business context could use, inspect, and modify the product for anywhere from cheap to free. Commercial users buy a license that’s anything from cheap to enterprise-priced. The commercial use generates revenues that pay the developers. Project keeps getting focused work by talented people. Folks working on it might also be able to maintain work-life balance. If 40-hr workweek, then they have spare time and energy for other projects (eg F/OSS). If mix of shared-source and F/OSS, a percentage of the funds will go to F/OSS.
I think that covers a large number of users with acceptable tradeoffs. Harder to market than something free. The size of the security and privacy markets makes me think someone would buy it.
Yes, people love getting things for free.
Cockroach Labs likes getting things for free, but has decided that they don’t like giving things away for free. This is a choice they have the legal right to make, of course, but that doesn’t necessarily make it the right decision.
From a business perspective, it’s a very bad sign. A company suddenly switching from open source to proprietary/“source available” is usually a company where the vultures are already circling. And mostly it indicates a fundamental problem with the business model; changing the license like this doesn’t fix that problem, and in fact can’t fix it. If demand for CockroachDB is significant enough, other people will fork from the last open-source release and keep it going. If demand for it isn’t significant enough, well, they won’t. And either way, Cockroach Labs probably won’t make back what the VCs invested into it.
From a software-ecosystem perspective, it’s more than a bit hypocritical. Lots of people build and distribute permissive-licensed software, and Cockroach Labs has, if not profited (since they may not be profitable) from it, at least saved significant up-front development cost as a result. If what they wanted was a copyleft-style share-and-share-alike, there were licenses available to let them do that (which, from a business perspective, still would not have saved them). But that’s not really what they wanted (and by “they” I mean the people in a position to impose decisions, which does not mean the engineering team or possibly even the executive team). What they seem to have wanted was to be proprietary from the start, and therefore to have absolute control over who was allowed to compete with them and on what terms. There is no open-source or Free Software license available which achieves that goal; the AGPL comes closest, but still doesn’t quite get there.
And there simply may not have been a business model available for CockroachDB that would satisfy their investors, but Cockroach Labs was founded at a time when it already should have been clear – especially to a founding team of ex-Googlers – where the market was heading with respect to managed offerings for this type of software. They could have tried other options, like putting more work into integrating with cloud providers’ marketplaces, but instead they knowingly signed up to get their lunch eaten, and do in fact appear to have gotten their lunch eaten.
Cockroach Labs likes getting things for free, but has decided that they don’t like giving things away for free.
You are hinting that Cockroach Labs are trying to act as freeloaders while ignoring the real elephant in the room: SaaS providers.
You are hinting that Cockroach Labs are trying to act as freeloaders while ignoring the real elephant in the room: SaaS providers.
I’m pointing out the simple fact that Cockroach Labs wants to have the right to build a business on open-source software, but wants to say that other entities shouldn’t have that same right. That’s literally what this comes down to, and literally what their new license tries to say.
Cockroach Labs likes getting things for free, but has decided that they don’t like giving things away for free.
That's an unfair characterization. The code they use is made by people who like giving stuff away for free. If permissive, they've already chosen a license that lets commercial software reuse it without giving back any changes. If copyleft under a GPL-like license, there are already bypasses to sharing, like SaaS, that they're implicitly allowing by not using a stronger license. They're also doing this in a market where most users of their libraries freeload. They then release the code under that license knowing all this, for whatever reasons they have in mind.
And then Cockroach Labs, whose goal is a mix of profit and public benefit, uses some of the code they were given for free. They modify the license to suit their goals. Each party contributing code should be fine with the result because each one is doing exactly what you’d expect with their licenses and incentives. If anything, CockroachDB is going out of their way to be more altruistic than other for-profit parties. They could be locking stuff up more.
They approve of the “take open-source software and build a business on it without financially supporting all the authors in a sustainable way” approach when it’s them doing it with other people’s software. They don’t approve when it’s Amazon doing it with CockroachDB. You can try to spin it, but that’s really what it comes down to.
And they want control over who’s allowed to compete with them and who’s allowed to use their software for what purposes. That’s fundamentally incompatible with their software being open source, and they’ve finally realized that, but it’s a bit late to be suddenly trying to change to proprietary.
I agree it won’t be open source software when they relicense it. I disagree that there’s any spin. I tell people who want to force contributions or money back to put it in their license with a clause blocking relicensing to non-OSS/FOSS. Yet, the OSS people still keep using licenses or contributing to software with such licenses that facilitate exactly what CockroachDB-like companies are doing.
I don't see how it's evil or hypocritical for a for-profit company acting in its own self-interest to use licensed components whose authors chose those licenses knowing they facilitate that. It wasn't the developers' only option. There was a ton of freeloading and hoarding of permissively-licensed components before they made the choice. Developers wanting contributions from selfish parties, especially companies, should use licenses that force it, like AGPL or Parity. The kinds of companies they gripe about mostly avoid that stuff. This building-on-permissive-licensing-and-relicensing problem has two causes, not one.
Note: There’s also people that don’t care if companies do that since they’re just trying to improve software they and other people use. Just figured I should mention that in case they’re reading.
I don't see how it's evil or hypocritical for a for-profit company acting in its own self-interest to use licensed components whose authors chose those licenses knowing they facilitate that.
It’s not “evil”. But it is at least a bit hypocritical to decide that you’re OK doing something yourself, but not with other people doing it too.
Given their intended business model, CockroachDB probably should have been proprietary from the start. Would’ve avoided this specific headache (but probably still wouldn’t have avoided the problem with the business model they chose).
“But it is at least a bit hypocritical to decide that you’re OK doing something yourself, but not with other people doing it too.” “CockroachDB probably should have been proprietary from the start”
“three years after each release, the license converts to the standard Apache 2.0 license”
Amazon isn’t giving all their stuff away after three years under a permissive, open-source license. What we’re really discussing is a company that will delay open-sourcing code by three years, not just license proprietary software. Every year, they’ll produce more open-source code. It will be three years behind the proprietary, shared-source version everyone can use except for SaaS companies cloning and selling their software. You’re talking like they’re not giving anything back or doing any OSS. They are. It’s just in a way that captures some market value out of it.
In contrast, the people making OSS dependencies usually aren’t doing anything to capture business value out of the code. If anything, they’re not even trying to. They’re directly or indirectly encouraging commercial freeloading with a license that enables it instead of using one that forbids it. So, CockroachDB doesn’t owe them anything or have any incentive to pay. Whereas, CockroachDB’s goal is to make profit on their own work. The goal differences are why there’s no hypocrisy here. It would be different if the component developers were copylefting or charging for CockroachDB’s dependencies with the company not returning code or pirating the components.
but not with other people doing it too
Have you heard anyone at Cockroach Labs say this? Wouldn’t they be able to offer their service based on 3 year old versions of every piece of OSS they use? It seems to me this license would work fine transitively, so there’s no hypocrisy involved.
If they have to pay dozens or, more likely, hundreds of separate license fees for the privilege of using all the other open-source components they depend on?
Sounds good to me. They have had millions of dollars of funding, they can easily pay some money to people who deserve it.
Or we’ll get something like ASCAP, but for software instead of music.
They should probably provide some examples of what they consider a CockroachDB service, vs a service that’s using CockroachDB underneath.
I believe I read somewhere that they considered the user having the ability to freely modify the schema as being “as a service”
Edit: found it
The user of a “CockroachDB as a Service” company, that is (not just a user of CockroachDB in general)
Thx @trousers @johnaj for the clarification. I guess, for me, this muddied the waters, so to speak.
Say, hypothetically, I have a SaaS that allows my customers to upload logs from IoT devices, and schema (in my DSL) explaining the data, and some SQL-like (but can also be my DSL) queries about their data.
My service is to provide the results of the queries back to them via dashboards/PDFs etc. The hypothetical SaaS charges for that service (and hopes, in some distant future, to make net profit)
Underneath, I want to use CockroachDB.
When customer provides their data explanation in DSL, I actually translate it into CockroachDB schema, and create materialized and non-materialized views (I do not know if the DB supports this, let’s assume – it does). I do that so that customer’s queries can be translated to database statements more easily (and run efficiently).
So I have a SaaS service, and allow customers (although indirectly) to create schema specific to their data in my database.
Will I need license?
From what I am reading right now, I will.
This is not good or bad – but I hope, then, that Postgres would never adopt BLISS.
Maybe I am wrong, so I hope to hear what others think.
Will I need license?
No. I think anything that is indirect (they are not using the wire protocol or directly issuing queries) is not going to require a license.
That said, I can see how your example is demonstrative of a possible problem – if Amazon created like a graphQL layer in front of it that just sort of translated to and from CockroachDB would that give them safety license wise – and I think it would.
Right, there is ambiguity about the ‘type or class’ of layers that when added, will not require a license vs layers that will require a license.
If I correctly understand the spirit and the intent of their license, I actually think CockroachDB should protect themselves, and specify the following layers:
a) security + access control layers
b) performance + scalability layers
c) General (not domain specific) query meta language layers
d) Deployment layers (eg ansible roles on top)
e) Hardware layer underneath (eg optimized FPGA/GPUs)
If a SaaS business added, in essence, only the above layers on top of their DB, and then sold that as SaaS together with CockroachDB – they would need the BLISS license.
Also, at the end of the day, their license may still end up being free for some businesses that fall under BLISS – but I think the CockroachDB team and their investors want to be in control of that decision…
I think Pony is a bit more cumbersome to use than many other languages, at least for a simple "Hello, World!" example in SDL2, but it feels surprisingly solid. https://github.com/xyproto/sdl2-examples/blob/master/pony/main.pony
I will say, this is a Pony program that uses SDL by directly linking against external C functions! Most SDL hello-world examples you'll see in other languages use a library wrapping the external calls. I think it speaks volumes that the Pony source is nonetheless both readable and short, especially considering that Pony is a managed language with support for actors. (In comparison, the C FFI in both Go and Erlang tends to be much harder.)
It uses SDL2 directly only because no SDL2 library was available for Pony at the time (I'm not sure if there is one available now).
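For anyone curious what "directly linking against external C functions" looks like, here's a stripped-down sketch in the style of the linked example, using the Pony FFI syntax of that era. The SDL function names and the SDL_INIT_VIDEO flag value come from SDL's own headers; the rest is illustrative:

// Minimal sketch of Pony's C FFI in the style of the linked SDL2 example.
// "use lib:SDL2" links against libSDL2; the @-calls invoke the C functions
// directly, with the return type given in brackets.
use "lib:SDL2"

actor Main
  new create(env: Env) =>
    // 0x00000020 is SDL_INIT_VIDEO in SDL's headers.
    if @SDL_Init[I32](U32(0x20)) == 0 then
      // ... create a window, render, present, etc. ...
      @SDL_Quit[None]()
    else
      env.out.print("SDL_Init failed")
    end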
I just did some exercises in Pony and Rust and I definitely found Pony the more elegant and easier language, but with much worse library support.
We definitely have a considerably smaller community than Rust at this point. In part, I think that is:
1- Rust has corporate backing and people paid to work on it
2- It's post-1.0 and most people won't consider using pre-1.0 stuff
More people contributing to the Pony ecosystem (including libraries) is something we could really use and would happily support as best we can. We’ve had a lot of excited people start to pitch in. Most haven’t contributed to open source projects before and I think for many, the shine rather quickly rubs off. I don’t blame them, maintaining open source software is… well, it’s an “interesting hobby”.
Absolutely agree. Even contributing to a few projects, I can see that I wouldn’t want to be a maintainer without being paid or “it” being my big idea.
I don’t do much system level programming, so I don’t need either, really, so I’m very unlikely to step up.
A bridge to rust might help though?
A bridge to rust might help though?
Pony can directly call into C (and therefore into Rust) and generate C headers, which Rust can consume to generate interfaces. The biggest problem with "just" using both from both sides is language semantics outside of the function-call interface that the C ABI provides. Also, Rust's ABI is currently private.
I don’t do much system level programming, so I don’t need either, really, so I’m very unlikely to step up.
Both Rust and Pony provide safety guarantees and features way past “safe systems programming” and Rust is definitely used as a “wicked fast Python” in some circles. It’s an interesting space to work in, currently :).
I really like the syntax, and the underlying ideas. I recently speed read through the tutorial, and the most daunting aspect of it (for me) was the reference capabilities part. I hope I can find a side project to play with it some more.
Plus the language is named pony, which makes it instantly great. ;)
How big is your production environment? Is it realistic and affordable to run a regularly updated “exact copy”?
The big question: why not port something else like pony or cloud haskell? Or why not add static types to something like erlang itself?
I’m not sure what you mean about Pony or Cloud Haskell but I have some answers to why not Erlang.
Statically typing Erlang is a different and considerably more difficult problem than making a compatible, soundly typed language. Realistically, for a typer for Erlang to get adoption, it needs to be flexible enough to be applied incrementally to existing Erlang codebases, which means gradual typing. Gradual typing offers a very different set of guarantees to those that Gleam offers. By default it's unsafe, which isn't what I wanted.
There are also some more human issues, such as my not being part of Ericsson, so I would have little to no ability to make use of or adapt the existing compiler; I would have to attempt to be compatible with their compiler. If we were successful in that, the add-on nature means it becomes more of a battle to get it adopted in Erlang projects, something that I would say the officially supported Dialyzer has struggled to do.
Overall I don’t believe it would be an achievable goal for me to add static types to Erlang, but making a new language is very achievable. It also gives us an opportunity to do things differently from Erlang- I love Erlang, but it certainly has some bits that I like less.
I'm not sure what the point of Pony on BEAM would be. Pony's reference capabilities statically prove that "unsafe things" are safe to do. BEAM wouldn't allow you to take advantage of that. Simplest example: message passing would result in data copying. This doesn't happen in the Pony runtime. You'd be better off using one of the languages that runs on BEAM, including ones with static typing.
Thanks for Pony and thanks for the reply!
As you might imagine, I'm very interested in how Pony types actors. I skimmed the documentation but many aspects, such as supervision trees, were unclear to me (likely my fault). Are there any materials on structuring Pony applications you could recommend? Thank you :)
There are no supervision trees at this time in Pony. How you would go about doing that is an interesting open question.
Because Pony is statically typed and you can't have null references, the way you send a message to an actor is by holding a "tag" reference to that actor and calling behaviors on it. This has several advantages but, if you were to try to do Erlang-style "let it crash", you run into difficulties.
What does it mean to “let it crash”? You can’t invalidate the references that actors hold to this actor. You could perhaps “reset” the actors in some fashion but there’s nothing built into the Pony distribution to do that, nor are the mechanics something anyone has solid ideas about.
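To make the "hold a tag and call behaviors on it" point concrete, here's a minimal sketch (the Counter actor is made up for illustration):

// A message send in Pony is a behaviour call on a tag reference. The
// reference is statically guaranteed to refer to an actor, so there's no
// "this pid might be dead or null" case to handle, which is exactly why
// Erlang-style "let it crash" doesn't map over directly.
actor Counter
  var _count: U64 = 0

  be increment() =>
    _count = _count + 1

  be report(out: OutStream) =>
    out.print(_count.string())

actor Main
  new create(env: Env) =>
    // c is a tag reference: all we can do with it is send messages.
    let c: Counter = Counter
    c.increment()
    c.report(env.out)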
There aren’t a lot of materials on structuring Pony applications I can recommend at this time. I’d suggest stopping by our Zulip and starting up a conversation in the beginner help stream: https://ponylang.zulipchat.com/#narrow/stream/189985-beginner-help
AFAIK there’s still the issue with read-only access for public builds which I’d regard as a blocker for public project usage https://github.com/buildkite/feedback/issues/137
I’m really glad you pointed this out as I’ve been planning on transitioning some of the Pony CI over to buildkite and it would have sucked when I found out about this.
Apparently they’ve now fixed this, as per @j605’s link below, and the ticket has now been closed out as I’ve pointed this out to upstream.
Is there a reason to consider this instead of something like 1Password or Bitwarden? I read the FAQ and it seems like it is the same except tied to Firefox.
Dunno about their offerings, but what I personally like about Lockbox is that it's got a super friendly UX (it caters to simplicity of use rather than to us nerds :-)). I already use Firefox Sync on all my devices, and the Android autofill API allows using it for apps too.
Lockbox is also free of charge, free software and encrypts data on the client, not in the cloud.
If I follow correctly, you are saying it's encrypted at the client and then synced to the cloud. Correct?
1Password or Bitwarden
Those are proprietary, and I would assume the Lockbox thing isn’t?
1Password is proprietary, bitwarden isn’t. oops.
Bitwarden is open source:
Have to say, my experience of Bitwarden has been nothing but positive. Much prefer it to alternatives like LastPass.
Us was really awesome. Y’all should go see it.
And I got that release done. So I guess all that is left is exercising and relaxing.
Nice.
This is an exciting project, but I am not touching it until GCC is supported.
GCC is supported everywhere but Windows. We’d love it if you wanted to help with getting GCC working on Windows with Pony.
Honestly I can and would like to help but I am pretty busy:
Maybe in a month or so if it’s not done by then
You can get GCC with about 26 MB worth of downloads. Visual Studio is about two orders of magnitude above that. In addition, I don't believe the Visual Studio compiler is open source.
So until that changes, I will use and stick with projects that use GCC/Clang.
As @WilhelmVonWeiner mentioned, I would invest in a copy of The Design and Implementation of the 4.4BSD Operating System or The Design and Implementation of the FreeBSD Operating System.
And I'd suggest starting by writing your own simple operating system kernel. Personally, I moved from there to studying Minix (this was many years ago). Minix was (and I hear still is) great for learning from. It's different from Linux in that it has a microkernel architecture; that said, you will learn a lot from it because it's easy to read through and understand. Between writing your own and Minix, you'll be off to a good start.
I still find the source for the various BSDs easier to follow than Linux and would suggest making that your next big move; graduate to a BSD if you will and from there, you could leap to Linux.
One other thing to consider: picking a less-used kernel to start hacking on when you feel comfortable might be a good idea if you can find folks in that community to mentor you. In the end, the code base matters less than having people who are grateful for your assistance and want to help you learn. In my experience, smaller communities are more likely to be ones you can find mentors in. That said, your mileage may vary greatly.
I could take guesses based on number of people who are committers and the development process as to why that is the case, but in the end, it would be speculation. I know I’m not alone in this feeling, but I don’t know if I’m in the majority or minority.
Any pros and cons of starting with the 4.4 BSD Operating System vs FreeBSD Operating System book?
It’s not about Linux per se, but it does relate to how operating systems work and a similar kernel: The Design and Implementation of the 4.4BSD Operating System. I bought it on the recommendation of John Carmack and the depth it goes into is great. Every chapter also has little quizzes without answers, so you can confirm to yourself you know how the described system components should work.
The irony is that he’s now trying to build better tools that use embedded DSLs instead of YAML files but the market is so saturated with YAML that I don’t think the new tools he’s working on have a chance of gaining traction and that seems to be the major part of angst in that thread.
One of the analogies I like about the software ecosystem is yeast drowning in the byproducts of their own metabolic processes after converting sugar into alcohol. Computation is a magical substrate but we keep squandering the magic. The irony is that Michael initially squandered the magic, and in the new and less magical regime his new tools don't have a home. He contributed to the code-less mess he's decrying, because Ansible is one of the buggiest and slowest pieces of infrastructure management tooling I've ever used.
I suspect, like all hype cycles, people will figure it out eventually, because Ant used to be a thing and now it is mostly accepted that XML for a build system is a bad idea. Maybe eventually people will figure out that infrastructure as YAML is not sustainable.
Thanks for bringing Pulumi to my radar, I hadn’t heard of it earlier. It seems quite close to what I’m currently trying to master, Terraform. So I ended up here: https://pulumi.io/reference/vs/terraform.html – where they say
Terraform, by default, requires that you manage concurrency and state manually, by way of its “state files.” Pulumi, in contrast, uses the free app.pulumi.com service to eliminate these concerns. This makes getting started with Pulumi, and operationalizing it in a team setting, much easier.
Which, to me, seemed rather dishonest. Terraform's way seems much more flexible and doesn't tie me to HashiCorp if I don't want that. Pulumi seems like a modern SaaS money vacuum: https://www.pulumi.com/pricing/
The positive side, of course, is that doing many programmatic things in Terraform HCL is quite painful, as all non-Turing-complete programming tends to be when you stray from the path the language designers built for you… Pulumi obviously handles that much better.
I work at Pulumi. To be 100% clear, you can absolutely manage a state file locally in the same way you can with TF.
The service does have a free tier though, and if you can use it, I think you should, as it is vastly more convenient.
In a burndown chart, I want to be able to run simulations as well. “What happens to this project if X work falls behind. What happens if Bob gets sick?”
Is anything in software development predictable enough to make this kind of analysis useful?
No, this is basically a management wish-fulfillment fantasy.
Seeing that we thought something would take 8 hours of work but we spent 24 is incredibly valuable. We can revisit the assumptions we made when estimating and see where we got it wrong. Then, we can try to account for it next time. Yes, estimating is hard, but it's also a skill you can get better at if you work at it and have the support of proper tools.
Likewise, I have heard this asserted by every manager I’ve ever worked with and for. No evidence has ever been presented, nor have estimates actually improved over time. (The usual anti-pattern is: estimates get worse over time because every time an estimate turns out to be low, a fix is proposed–“what if we do all-team estimates? what if we update estimates more frequently?”–which inevitably means spending more time on estimates, which means spending less time on development, which means we get slower and slower.)
I personally used to be quite bad at estimating. I’ve worked at it, I’ve gotten much better about estimating. There are things you can do to get much better at it. None of the things you’ve mentioned are ones I think would help. I plan on writing a post about the things I’ve learned and taught others that have helped make estimates more accurate.
That would make great reading.
Are there any existing accounts of effective software schedule estimation you’d recommend?
Two things I would recommend (and will be primary topics of said blog post).
Estimates slipping is usually about not accurately accounting for risk. Waltzing with Bears is a great book on dealing with risk management. The ideas in it might be overkill for many folks but the process of thinking about how you should account for risk and establishing your own practices is invaluable. The book is great even if you only use it as “that seems overblown, what if I…”.
The second is to record your estimates and why you made them. What did you know at the time that you made your estimate? Then, when your estimate is wrong, examine why. What didn't you account for? When I first started doing this, I realized that most of my estimates were wrong because I didn't bother to explore the problem enough and was being tripped up by not accounting for known knowns. Eventually I got better at that and started getting tripped up by known unknowns (that's risk). I've since adopted some techniques for fleshing out risks when I am estimating and then front-loading work on those risks. If you think something might take a week or it might take a month, work on that first. Dig in to understand the problem so you can better estimate it. Maybe the problem is way harder than you imagine and you shouldn't be doing the project at all. This isn't a new concept but it's one that is rarely used. In agile methodologies, it's usually called a "spike".
I've worked on projects that spanned months, and we spent a couple of weeks on estimation and planning. A big part of that time was digging in, understanding the problem better, discussing it, and coming up with what we needed to explore more to really understand the project, so we could course-correct as we went along.
The wish-fulfilment does not exist in a vacuum.
Your customers might not be happy with your team constantly running late. Your pre-revenue startup might have a hard time raising investment. Whatever. There are external reasons for why a professional developer must be reliable in his or her estimates to actually get things out the door.
I've been changing my opinion on this back and forth. Especially in a pïss-poor startup, where the biz guys wanted us to skip unit testing to achieve results faster, refusing to estimate was a fuck you. The code base got convoluted, but dealing with how they represented things was also frustrating.
I feel that in those cases the problem runs deeper in how geeks are supposed to be managed. Hell, it could be that estimation starts eating up time because the managers drove the geeks into protesting, which is - of course - as unprofessional as delivering late. Still you need to sort out your org for smooth ops before taking care of estimates.
Yet this isn’t car repair for old vehicles, where you find more and more problems as you go along, making estimates tough without thorough diagnostics, but the customer is happy with the substitute car you gave out in the meantime.
The fact that customers want something, or that it is necessary to the business's success, does not cause it to become possible. I'm not disputing the desirability of accurate estimates. I'm disputing the idea that they are possible. I have not seen any team or technique generate such estimates reliably, over time, in various circumstances. (Of course, like any gambler with a "system," sometimes people making estimates get lucky and turn out to be right by coincidence.) I have heard many managers claim to have a system for reliable estimates which worked in some previous job; none was able to replicate that success on the teams I observed directly.
(It’s not just software, either. Many people point to the building trades as an example of successful estimation and scheduling. In my experience maintaining and restoring an old house and the experiences of friends and acquaintances who’ve undertaken more ambitious restorations, this is more wishful thinking. It’s common for estimates by restoration contractors on larger jobs to be off by months or years, and vast amounts of money. If so mature an industry can’t manage reliable scheduling, what hope is there for us?)
Yet this isn’t car repair for old vehicles, where you find more and more problems as you go along, making estimates tough without thorough diagnostics, but the customer is happy with the substitute car you gave out in the meantime.
I’d argue that is exactly what much software development is like (except there is no substitute for the customer).
Maybe I’d like to be more optimistic about learning to estimate better ;) But for sure @SeanTAllen touched on a lot of pertinent points. Is it Alice or Bob who gets the task? How well is the problem space, the code, known? And so on.
It’s hard as balls, and you’re not wrong with your gambler analogy, but not all systems for getting things right occasionally are equally unlikely to succeed. Renovators also learn what to look out for and how long those issues tend to take, as well as the interactions. Probabilities are usually ok by customers if that’s all you got.
In my car analogy, the point kinda was that we’re screwed because we can’t give out substitutes. We can deliver temporary hacks, although nothing is as permanent as a temporary hack.
A lot of activities in software development try to improve predictability. For example: following style guidelines, continuous integration, unit testing, etc. All of these have a cost and slow developers down. The upside of course is to reduce bugs which will slow you down much more later. Or maybe not. The risk is generally too high, so we generally prefer the predictability of a slow incremental process.
I have a feeling that Deming thought about that when he talked about "variation", but I need to read more from the father of Lean and grandfather of Agile to understand it. Currently, I believe that I don't assign quite the correct meaning to the words I read from him.
Rich has been railing on types for the last few keynotes, but it looks to me like he’s only tried Haskell and Kotlin and that he hasn’t used them a whole lot, because some of his complaints look like complete strawmen if you have a good understanding and experience with a type system as sophisticated as Haskell’s, and others are better addressed in languages with different type systems than Haskell, such as TypeScript.
I think he makes lots of good points, I’m just puzzled as to why he’s seemingly ignoring a lot of research in type theory while designing his own type system (clojure.spec), and if he’s not, why he thinks other models don’t work either.
One nit: spec is a contract system, not a type system. The former is often used to patch up a lack of the latter, but it’s a distinct concept you can do very different things with.
EDIT: to see how they can diverge, you’re probably better off looking at what Racket does than what Clojure does. Racket is the main “research language” for contracts and does some pretty fun stuff with them.
It's all fuzzy to me. They're both formal specifications, and they overlap in a lot of ways. Many of the types people describe could be pre/post conditions and invariants in contract form on specific data or the functions over them. Then, a contract system extended to handle all kinds of things past Booleans will use enough logic to be able to do what advanced type systems do.
Absent Pierce or someone formally defining it, I don't know, as a formal-methods non-expert, that contract and type systems in their general form are fundamentally that different, since they're used the same way in a lot of cases. Interchangeably, it would appear, if each uses equally powerful and/or automated logics.
It’s fuzzy but there are differences in practice. I’m going to assume we’re using non-FM-level type systems, so no refinement types or dependent types for full proofs, because once you get there all of our intuition about types and contracts breaks down. Also, I’m coming from a contract background, not a type background. So take everything I say about type systems with a grain of salt.
In general, static types verify a program's structure, while contracts verify its properties. Like, super roughly, static types are about whether a program is sense or nonsense, while contracts are about whether it's correct or incorrect. Consider how we normally think of tail in Haskell vs, like, Dafny:

tail :: [a] -> [a]

method tail<T>(s: seq<T>) returns (o: seq<T>)
  requires |s| > 0
  ensures [s[0]] + o == s
The tradeoff is that verifying structure automatically is a lot easier than verifying semantics. That’s why historically static typing has been compile-time while contracts have been runtime. Often advances in typechecking subsumed use cases for contracts. See, for example, how Eiffel used contracts to ensure “void-free programming” (no nulls), which is subsumed by optionals. However, there are still a lot of places where they don’t overlap, such as in loop invariants, separation logic, (possibly existential contracts?), arguably smart-fuzzing, etc.
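To illustrate that Eiffel point concretely (a hedged sketch in Pony, since it's the language elsewhere in this thread; Users.lookup and its data are made up): the "no nulls" guarantee that used to be a runtime contract becomes a union type the compiler makes you handle.

// What Eiffel's void-free contract checked at runtime, an optional-style
// union type enforces at compile time: the None case must be handled
// before the String can be used.
primitive Users
  fun lookup(id: U64): (String | None) =>
    if id == 1 then "alice" else None end

actor Main
  new create(env: Env) =>
    match Users.lookup(42)
    | let name: String => env.out.print(name)
    | None => env.out.print("no such user")
    end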
Another overlap is refinement types, but I’d argue that refinement types are “types that act like contracts” versus contracts being “runtime refinement types”, as most successful uses of refinement types came out of research in contracts (like SPARK) and/or are more ‘contracty’ in their formulations.
Fundamentally? Not really, nor vice versa. Both let you say arbitrary things about a function.
In practice contracts are more popular for industrial work because they so far seem to map better to imperative languages than dependent types do.
That makes sense, thanks! I’ve never heard of them. I mean I’ve probably seen people throw the concept around but I never took it for an actual thing
I see the distinction when we talk about pure values, sum and product types. I wonder if the IO monad, for example, isn't kind of more on the contract side of things. Sure, it works as a type, type inference algorithms work with it, but the side-effect thing makes it seem more like a pattern.
I’m just puzzled as to why he’s seemingly ignoring a lot of research in type theory
Isn’t that his thing? He’s made proud statements about his disinterest in theory. And it shows. His jubilation about transducers overlooked that they are just a less generic form of ad-hoc polymorphism, invented to abstract over operations on collections.
wow, thanks for that, never really saw it that way but it totally makes sense. not a regular clojure user, but love lisp, and love the ML family of languages.
So? Theory is useless without usable and easy implementation
No. Let me setup a thought experiment to show the flaw in this argument…
Let’s say this sequence of events happens over time:
By the logic quoted above:
Such a scenario can happen when an implementer thinks something like “Theory X is quite general; I’m not sure I even can understand its terminology; I certainly don’t know how to apply it to my work.” However, once Theory Y comes along, it is easier for implementors to see the value, and, voila, you get enough interest to generate imp(Y).
However, Y could not have happened without X, so X must be valuable too. This leads to a contradiction: Theory X is both useful and useless.
The problem arises out of an overly narrow definition of “useful” (commenter above).
In truth, the word “useful” can mean many things in different contexts. One kind of use is implementation (i.e. a library). Another kind of use is to build additional thinking around it. Yes, I’ll say it – theory is useful. It may be common to bash on “theory”, but this is too often an unfair and misattributed attack.
I don’t see anything of substance in this comment other than “Haskell has a great type system”.
I just watched the talk. Rich took a lot of time to explain his thoughts carefully, and I’m convinced by many of his points. I’m not convinced by anything in this comment because there’s barely anything there. What are you referring to specifically?
edit: See my perspective here: https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey#c_povjwe
That wasn’t my point at all. I agree with what Rich says about Maybes in this talk, but it’s obvious from his bad Haskell examples that he hasn’t spent enough time with the language to justify criticizing its type system so harshly.
Also, what he said about representing the idea of a car with information that might or might not be there in different parts of a program might be correct in Haskell’s type system, but in languages with structural subtyping (like TypeScript) or row polymorphism (like Ur/Web) you can easily have a function that takes a car record which may be missing some fields, fills some of them out and returns an object which has a bit more fields than the other one, like Rich described at some point in the talk.
I’m interested to see where he’s gonna end up with this, but I don’t think he’s doing himself any favors by ignoring existing research in the same fields he’s thinking about.
But if you say that you need to go to TypeScript to express something, that doesn’t help me as a Haskell user. I don’t start writing a program in a language with one type system and then switch into a language with a different one.
Anyway, my point is not to have a debate on types. My point is that I would rather read or watch an opinion backed up by real-world experience.
I don’t like the phrase “ignoring existing research”. It sounds too much like “somebody told me this type system was good and I’m repeating it”. Just because someone published a paper on it, doesn’t mean it’s good. Plenty of researchers disagree on types, and admit that there are open problems.
There was just one here the other day!
https://lobste.rs/s/dldtqq/ast_typing_problem
I’ve found that the applicability of types is quite domain-specific. Rich Hickey is very clear about what domains he’s talking about. If someone makes general statements about type systems without qualifying what they’re talking about, then I won’t take them very seriously.
seemingly ignoring a lot of research in type theory
I’ve come to translate this utterance as “it’s not Haskell”. Are there languages that have been hurt by “ignoring type theory research”? Some (Go, for instance) have clearly benefited from it.
I don't think Rich is nearly as ignorant of Haskell's type system as everyone seems to think. You can understand this stuff and not find it valuable, and it seems pretty clear to me that this is the case. He's obviously a skilled programmer whose perspective warrants real consideration; people who are enamored with type systems shouldn't be quick to write him off even if they disagree.
I don’t like dynamic languages fwiw.
I don't think we can assume anything about what he knows. Even Haskellers here are always learning about its type system or new uses for it. He spends most of his time in a LISP. It's safe to assume he knows more LISP benefits than Haskell benefits until we see otherwise in the examples he gives.
Best thing to do is probably to come up with lots of examples to run by him at various times/places. See what he says for/against them.
I guess I would want to hear what people think he's ignorant of, because he clearly knows the basics of the type system: sum types, typeclasses, etc. The Clojure reducers docs mention requiring associative monoids. I would be extremely surprised if he didn't know what monads were. I don't know how far he has to go for people to believe he really doesn't think it's worthwhile. I heard Edward Kmett say he didn't think dependent types were worth the overhead, saying that the power-to-weight ratio simply wasn't there. I believe the same about Haskell as a whole. I don't think it's insane to believe that about most type systems, and I don't think Hickey's position stems from ignorance.
Good examples supporting that he might know the stuff. Now we just need more detail to further test the claims on each aspect of language design.
From the discussions I see, it’s pretty clear to me that Rich has a better understanding of static typing and its trade offs than most Haskell fans.
I’d love to hear in a detailed fashion how Go has clearly benefited from “ignoring type theory research”.
Rust dropped GC by following that research. Several languages had race freedom with theirs. A few had contracts or type systems with similar benefits. Go’s developers ignored that to do a simpler, Oberon-2- and C-like language.
There were two reasons. dmpk2k already gave the first, which Rob Pike has said himself: it was designed so that anyone from any background could pick it up easily right after Google hired them. Also, simplicity and consistency make it easy for them to immediately go to work on codebases they’ve never seen. This fits both Google’s needs and companies that want developers to be replaceable cogs.
The other is that the three developers had to agree on every feature. One came from C. One liked stuff like Oberon-2. I don’t recall the other. Their consensus was never going to be an OCaml, Haskell, Rust, Pony, and so on. It was something closer to what they liked and understood well.
If anything, I thought at the time they should’ve done something like Julia, with a mix of productivity features, high C/Python integration, a usable subset people stick to, and macros for just when needed. Much better. I think a Noogler could probably handle a slightly-more-advanced language than Go. That team wanted otherwise…
I have a hard time with a number of these statements:
“Rust dropped GC by following that research”? So did C++ also follow research to “drop GC”? What about C? I’ve seen plenty of type system conversation related to Rust, but nothing that I would attribute directly to “dropping GC”. That seems like a bit of a simplification.
Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared? I’ve seen Rob Pike talk about wanting to appeal to C and C++ programmers, but nothing about ignoring type research. I’d be interested in hearing about that being done and what they thought the benefits were.
It sounds like you are saying that the benefit is something familiar and approachable. Is that a benefit to the users of a language or to the language itself? Actually, I guess that is more like: is the benefit that it made Go approachable and familiar to a broad swath of programmers, and that this allowed it to gain broad adoption?
If yes, is there anything other than anecdotes (which I would tend to believe) to support that assertion?
“That seems like a bit of a simplification.”
It was. The topic is enormously complex. It gets worse when you consider I barely knew C++ before I lost my memory. I did learn about memory pools and reference counting from game developers who used C++. I know it keeps getting updated in ways that improve its safety. The folks that understand C++ and Rust keep arguing about how safe C++ is, with hardly any argument over Rust, since its safety model is baked thoroughly into the language rather than being one option in a sea of options. You could say I’m talking about Rust’s ability to be as safe as a GC in most of an app’s code, without runtime checks on memory accesses.
“Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared?”
Like with the Rich Hickey replies, this burden of proof is backwards, asking us to prove a negative. If assessing what people knew or did, we should assume nothing until we see evidence in their actions and/or informed opinions that they did these things. Only then do we believe they did. I start by comparing what I’ve read of Go to Common LISP, ML’s, Haskell, Ada/SPARK, Racket/Ometa/Rascal on the metaprogramming side, Rust, Julia, Nim, and so on. Go has almost nothing in it compared to these. It looks like a mix of C, Wirth’s stuff, CSP-like stuff from the 1970’s-1980’s, and maybe some other things. Not much past the 1980’s. I wasn’t the first to notice, either. The article gets the point across despite its problems, which the author apologized for.
Now, that’s the hypothesis from observation of Go’s features vs other languages. Let’s test it on intent first. What was the goal? Rob Pike tells us here, with Moray Taylor having a nicer interpretation. The quote:
The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.
So, they’re intentionally dumbing the language down as much as they can while making it practically useful. They’re doing this so smart people from many backgrounds can pick it up easily and go right to being productive for their new employer. It’s also gotta be C-like for the same reason.
Now, let’s look at its prior inspirations. In the FAQ, they tell you the ancestors: “Go is mostly in the C family (basic syntax), with significant input from the Pascal/Modula/Oberon family (declarations, packages), plus some ideas from languages inspired by Tony Hoare’s CSP, such as Newsqueak and Limbo (concurrency).” They then make an unsubstantiated claim, in that section at least, that it’s a new language across the board to make programming better and more fun. In reality, it seems really close to a C-like version of the Oberon-2 experience one developer (can’t recall who) wanted to recreate, with concurrency and tooling for aiding large projects. I covered the concurrency angle in another comment. You don’t see a lot of advanced or far-out stuff here: decades-old tech that’s behind current capabilities. LISP’ers, metaprogrammers and REBOL’s might say behind old tech, too. ;)
Now, let’s look at execution of these C, Wirth-like, and specific concurrency ideas into practice. I actually can’t find this part. I did stumble upon its in-depth history of design decisions. The thing I’m missing, if it was correct, is a reference to the claim that the three developers had to agree on each feature. If that’s true, it automatically would hold the language back from advanced stuff.
In summary, we have a language designed by people who mostly didn’t use cutting-edge work in type systems, employed nothing of the sort, looked like languages from the 1970’s-1980’s, considered those languages its ancestors, is admittedly dumbed-down as much as possible so anyone from any background can use it, and maybe involved consensus from people who didn’t use cutting-edge stuff (or even much that was cutting-edge from the 90’s onward). They actually appear to be detractors of a lot of that stuff, if we consider the languages they pushed as reflecting their views on what people should use. Meanwhile, the languages I mentioned above used stuff from the 1990’s-2000’s, giving them capabilities Go doesn’t have. I think the evidence weighs strongly in favor of that being because the designers didn’t look at it, were opposed to it for technical and/or industrial reasons, couldn’t reach a consensus, or some combo.
That’s what I think of Go’s history for now. People more knowledgeable, feel free to throw out any resources I might be missing. It just looks to be a highly-practical, learn/use-quickly, C/Oberon-like language made to improve onboarding and productivity of random developers coming into big companies like Google. Rob Pike even says that was the goal. Seems open and shut to me. I thank the developers of languages like Julia and Nim for believing we were smart enough to learn a more modern language, even if we have to subset them for inexperienced people.
Sorry, that isn’t detailed. Is there evidence that it’s easy for these programmers to pick up? What does “easy to pick up” mean? To get something to compile? To create error-free programs? “Clearly benefited” is a really loaded term that can mean pretty much anything to anyone. I’m looking for what the stated benefits are for Go. Is the benefit to Go that it is “approachable” and “familiar”?
There seems to be an idea in your statement that using any sort of type theory research will inherently make something hard to pick up. I have a hard time accepting that. I would, without evidence, be willing to accept that many type system ideas (like a number of them in Pony) are hard to pick up, but the idea that you have to ignore type theory research to be easy to pick up is hard for me to accept.
Could I create a language that ignores type system theory but using a non-familiar syntax and not be easy to pick up?
I already gave you the quote from Pike saying it was specifically designed for this. As far as the how, I think one of its designers explains it well in those slides. The Guiding Principles section puts simplicity above everything else. Next, a slide says Pascal was a minimalist language designed for teaching non-programmers to code. Oberon was similarly simple. Oberon-2 added methods on records (think simpler OOP). The designer shows Oberon-2 and Go code, saying it’s C’s syntax with Oberon-2’s structure. I’ll add benefits like automatic memory management.
Then, the design link said they chose CSP because (a) they understood it well enough to implement and (b) it was the easiest thing to implement throughout the language. Like Go itself, it was the simplest option rather than the best along many attributes. There were lots of people who picked up SCOOP (super-easy but with overhead), with probably even more picking up Rust’s method grounded in affine types. Pony is itself doing clever stuff using advances in language design. The Go language would ignore those since (a) the Go designers didn’t know them well from way back when and (b) it would’ve been more work than their intent/budget could take.
They’re at least consistent about simplicity for easy implementation and learning. I’ll give them that.
It seems to me that Go was clearly designed to have a well-known, well-understood set of primitives, and that design angle translated into not incorporating anything fundamentally new or adventurous (unlike Pony and its impressive use of object capabilities). It looked already old at birth, but it feels impressively smooth, in the beginning at least.
I find it hard to believe that CSP and Goroutines were a “well-understood set of primitives”. Given the lack of usage of CSP as a mainstream concurrency mechanism, I think that saying Go incorporates nothing fundamentally new or adventurous is selling it short.
CSP is one of the oldest ways people modeled concurrency. I think it was built on Hoare’s monitor concept from years before, which Per Brinch Hansen turned into Concurrent Pascal. He built the Solo OS with a mix of it and regular Pascal. It was also typical in high-assurance to use something like Z or VDM for specifying the main system, with concurrency done in CSP and/or some temporal logic. Then, SPIN became the dominant way to analyze CSP-like stuff automatically, with a lot of industrial use for a formal method. Lots of other tools and formalisms existed, though, under the banner of process algebras.
Outside of verification, the introductory text that taught me about high-performance, parallel computing mentioned CSP as one of the basic models of parallel programming. I was experimenting with it in maybe 2000-2001 based on what those HPC/supercomputing texts taught me. It also tied into the Agent-Oriented Programming I was looking into then, given that those were also concurrent, sequential processes distributed across machines and networks. A quick DuckDuckGo shows a summary article on Wikipedia mentions it, too.
There were so many courses teaching it and folks using it that experts in language design and/or concurrency should’ve noticed it a long time ago and tried to improve on it for their languages. Many did, some doing better. Eiffel SCOOP, ML variants like Concurrent ML, Chapel, Clay with Wittie’s extensions, Rust, and Pony are examples. Then you have Go doing something CSP-like (circa 1970’s) in the 2000’s, still getting race conditions and stuff. What did they learn? (shrugs) I don’t know…
Nick,
I’m going to take the 3 different threads of conversation we have going and try to pull them all together in this one reply. I want to thank you for the time you put into each answer. So much of what appears on Reddit, HN, and elsewhere is throwaway short stuff that often feels lazy or like communication wasn’t really the goal. For a long time, I have appreciated your contributions to lobste.rs because there is a thoughtfulness to them and an attempt to convey information and thinking that is often absent in this medium. Your replies earlier today are no exception.
Language is funny.
You have a very different interpretation of the words “well-understood primitives” than I do. Perhaps it has something to do with anchoring when I was writing my response. I would rephrase my statement this way (and I would still be imprecise):
While CSP has been around for a long time, I don’t think that, prior to Go, it was a well-known or familiar concurrency model for most programmers. From that, I would say it isn’t “well-understood”. But I’m reading quite a bit, based on context, into what “well-understood” means here. I’m taking it to mean “widely understood by a large body of programmers”.
And your response, Nick, actually makes me believe that more. The languages you mention aren’t ones that I would consider familiar or mainstream to most programmers.
Language is fun like that. I could be anchoring myself again. I rarely ask questions on lobste.rs or comment. I decided to on this occasion because I was really curious about a number of things from an earlier statement:
“Go has clearly benefited from “ignoring type theory research”.
Some things that came to mind when I read that and I wondered “what does this mean?”
“clearly benefited”
Hmmm, what does benefit mean? Especially in reference to a language. My reading of benefit is that “doing X helped the language designers achieve one or more goals in a way that had acceptable tradeoffs”. However, it was far from clear to me, that is what people meant.
“ignoring type theory research”
ignoring is an interesting term. This could mean many things and I think it has profound implications for the statement. Does ignoring mean ignorance? Does it mean willfully not caring? Or does it mean considered but decided not to use?
I’m familiar with some of the Rob Pike and Go early history comments that you referenced in the other threads. In particular related to the goal of Go being designed for:
The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.
I haven’t found anything, though, that shows there was a willful disregard of type theory. I wasn’t attempting to get you to prove a negative; I’m more just curious. Has the Go team ever said something that would fall under the heading of “type system theory, bah, we don’t need it”? Perhaps they have. And if they have, is there anything that shows a benefit from that?
There’s so much that is loaded into those questions though. So, I’m going to make some statements that are possibly open to being misconstrued about what from your responses, I’m hearing.
“Benefit” here means “helped make popular”, because Go, on its surface, presents a number of familiar concepts for the programmer to work with. There’s no individual primitive that feels novel or new to most programmers, except perhaps the concurrency model. However, on first approach that concurrency model is fairly straightforward in what it asks the programmer to grasp. Given Go’s stated goals from the quote above, it allows programmers to feel productive and “build good software”.
Even as I’m writing that, though, I start to take issue with a number of the assumptions that are built into the Pike quote. But that is fine. I think most of it comes down, for me, to what “good software” is and what “simple” is. And those are further loaded words that can radically change the meaning of a comment based on the reader.
So let me try again:
When people say “Go has clearly benefited from “ignoring type theory research” what they are saying is:
Go’s level of popularity is based, in part, on it providing a set of ideas that should be mostly familiar to programmers who have some experience with the Algol family of languages such as C, C++, Python, Ruby etc. We can further refine that to say that from the Algol family of languages that we are really talking about ones that have type systems that make few if any guarantees (like C). That Go put this familiarity as its primary goal and because of that, is popular.
Would you say that is a reasonable summation?
When I asked:
“Is there documentation that Go developers ignored type research? Has the Go team stated that? Or that they never cared?”
I wasn’t asking you to prove a negative. I was very curious if any such statements existed; I’ve never seen any. I’ve drawn a number of conclusions about Go based mostly on the Rob Pike quote you provided earlier. I was really looking for “has everyone else as well”, or do they know things that I don’t know?
It sounds like we are both mostly operating on the same set of information. That’s fine. We can draw conclusions from that. But I feel at least good in now saying that both you and I are inferring things based on what appears to be a mostly shared set of knowledge, and not that I am ignorant of statements made by Go team members.
I wasn’t looking for proof. I was looking for information that might help clear up my ignorance in the area.
I appreciate that you saw I was trying to put effort into it being productive and civil. Those posts took a while. I appreciate your introspective and kind reply, too. Now, let’s see where we’re at with this.
Yeah, it looks like we were using words with different meanings. I was focused on “well-understood” by the PLT types that design languages and the folks studying parallelism. Rob Pike, at the least, should be in both categories, following that research. Most programmers don’t know about it. You’re right that Go could’ve been the first time it went mainstream.
You also made a good point that it’s probably overstating it to say they never considered it. I have good evidence they avoided almost all of it; other designers didn’t. Yet they may have considered it (how much, we don’t know), assessed it against their objectives, and decided against all of it. The simplest approach would be to just ask them in a non-confrontational way. The other possibility is to look at each designer’s other work to see if it shows any indication they were considering or using such techniques. If those are absent, saying they didn’t consider it in their next work would be reasonable. Another angle would be to look at whether, like C’s developers, they had a personal preference for simpler (or barely any) typing, consistently avoiding developments in type systems. Since that’s a lot of work, I’ll leave it at “Unknown” for now.
Regarding its popularity, I’ll start by saying I agree its simple design reusing existing concepts was a huge element of that. It was Wirth’s philosophy to do the same thing for educating programmers. Go adopted that philosophy for the modern situation. Smart move. I think you shouldn’t underestimate the fact that Google backed it, though.
There were a lot of interesting languages over the decades with all kinds of good tradeoffs. The ones with major corporate backing and/or on top of advantageous foundations/ecosystems (eg hardware or OS’s) usually became big in a lasting way. That included COBOL on mainframes, C on cheap hardware spreading with UNIX, Java getting there almost entirely through marketing given its technical failures, .NET/C# forced by Microsoft on its huge ecosystem, Apple pushing Swift, and some smaller ones. Notice the language design is all across the board here in complexity, often more complex than existing languages. The ecosystem drivers, esp marketing or dominant companies, are the consistent thread driving at least these languages’ mass adoption.
Now, mighty Google claims it’s backing a new language for its massive ecosystem. It’s also designed by celebrity researchers/programmers, including one whom many in the C community respect. It might also be a factor in whether developers get a six-digit job. Those are two major pulls, plus a minor one, each of which in isolation can draw in developers. The two big ones, especially employment, will automatically create a large number of users if people think Google is serious. Both also have ripple effects where other companies copy what the big company is doing to avoid getting left behind. That makes the pull larger.
So, as I think about your question, I have that in the back of my mind. I mean, those effects pull so hard that Google’s language could be a total piece of garbage and still have 50,000-100,000 developers just going for a gold rush. I think that simplifying the design to make it super-easy to learn and to maintain existing code just turbocharges that effect. Yes, I think the design and its designers could lead to a significant community without Google. I’m just leaning toward it being a major employer with celebrity designers and fanfare causing most of it.
And then those other languages start getting uptake despite advanced features or learning troubles (esp Rust). That shows the Go team could’ve done better on typing using such techniques if they had wanted to and/or known about them. I said that’s unknown. Go might be the best they could do given their background, constraints, goals, or whatever. It’s good that at least four different groups made languages to push programming further into the 90’s and 2000’s instead of just the 70’s to early 80’s. There are at least three creating languages closer to C that are generating a lot of excitement. C++ is also getting updates making it more like Ada. Non-mainstream languages like Ada/SPARK and Pony are still getting uptake, even if smaller.
If anything, the choice of systems-type languages is exploding right now, with something for everyone. The decisions of Go’s language authors aren’t even worth worrying about, since that time can be put into more appropriate tools. I’m still going to point out that Rob Pike quote to people to show they had very, very specific goals, which made a language design that may or may not be ideal for a given task. It’s good for perspective. I don’t know the designers’ studies or their tradeoffs, and (given the alternatives) they barely matter past personal curiosity and PLT history. That also means I’ll remain too willfully ignorant about it to clear up anyone’s ignorance. At least till I see some submissions with them talking about it. :)
Thanks for the time you put into this @nickpsecurity.
Sure thing. I appreciate you patiently putting time into helping me be more accurate and fair describing Go designers’ work.
And thank you, I have a different perspective on Go now than I did before. Or rather, I have a better understanding of other perspectives.
I don’t have a good understanding of type systems. What is it that Rich misses about Haskell’s Maybe? Does changing the return type of a function from Maybe T to T not mean that you have to change code which uses the return value of that function?
Does changing the return type of a function from Maybe T to T not mean that you have to change code which uses the return value of that function?
It does in a way, but I think people sometimes over-estimate the amount of changes that are required. It depends on whether or not you really care about the returned value. Let’s look at an example. Say that we have a function that gets the first element out of a list, so we start out with something like:
getFirstElem :: [a] -> a
getFirstElem = head
Now, we’ll write a couple of functions that make use of this function. Afterwards, I’ll change my getFirstElem function to return a Maybe a so you can see when, why, and how these specific functions need to change.
First, let’s imagine that I have some list of lists, and I’d like to just return a single list that has the first element of each; for example, I might have something like ["foo","bar","baz"] and I want to get back "fbb". I can do this by calling map over my list of lists with my getFirstElem function:
getFirsts :: [[a]] -> [a]
getFirsts = map getFirstElem
Next, say we wanted to get an idea of how many elements we were removing from our list of lists. For example, in our case of ["foo","bar","baz"] -> "fbb", we’re going from a total of 9 elements down to 3, so we’ve eliminated 6 elements. We can write a function to help us figure out how many elements we’ve dropped pretty easily by looking at the sum of the lengths of the input lists, and the overall length of the output list.
countDropped :: [[a]] -> [b] -> Int
countDropped a b =
  let a' = sum $ map length a
      b' = length b
  in a' - b'
Finally, we probably want to print out our string, so we’ll use print:
printFirsts =
  let l = ["foo","bar","baz"]
      r = getFirsts l
      d = countDropped l r
  in print l >> print r >> print d
Later, if we decide that we want to change our program to look at ["foo","","bar","","baz"], we’ll see that our program crashes! Oh no! The problem is that head doesn’t work with an empty list, so we’d better go and update it. We’ll have it return a Maybe a so that we can capture the case where we actually got an empty list.
getFirstElem :: [a] -> Maybe a
getFirstElem = listToMaybe  -- listToMaybe (from Data.Maybe) gives Nothing for an empty list, Just the head otherwise
Now we’ve changed our program so that the type system will explicitly tell us whether we tried to take the head of an empty list or not- and it won’t crash if we pass one in. So what refactoring do we have to do to our program?
Let’s walk back through our functions one-by-one. Our getFirsts function had the type [[a]] -> [a], and we’ll need to change that to [[a]] -> [Maybe a] now. What about the code?
If we look at the type of map, we’ll see that it has the type map :: (c -> d) -> [c] -> [d]. Both versions of getFirstElem fit the (c -> d) argument: in both cases c ~ [a]; in the first case d ~ a, and in the second d ~ Maybe a. In short, we had to fix our type signature, but nothing in our code has to change at all.
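For concreteness, here’s a sketch of getFirsts after the change; the body is identical and only the signature differs:

getFirsts :: [[a]] -> [Maybe a]
getFirsts = map getFirstElem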
What about countDropped? Even though our types changed, we don’t have to change anything in countDropped at all! Why? Because countDropped is never looking at any values inside of the list; it only cares about the structure of the lists (in this case, how many elements they have).
Finally, we’ll need to update printFirsts. The type signature here doesn’t need to change, but we might want to change the way that we’re printing out our values. Technically we can print a Maybe value, but we’d end up with something like [Just 'f',Nothing,Just 'b',Nothing,Just 'b'], which isn’t particularly readable. Let’s update it to replace Nothing values with spaces:
-- fromMaybe comes from Data.Maybe
printFirsts :: IO ()
printFirsts =
  let l = ["foo","","bar","","baz"]
      r = map (fromMaybe ' ') $ getFirsts l
      d = countDropped l r
  in print l >> print r >> print d
In short, from this example you can see that we can refactor our code to change the type, and in most cases the only code that needs to change is code that cares about the value that we’ve changed. In an untyped language you’d expect to still have to change the code that cares about the values you’re passing around, so the only additional change that we had to make here was a very small update to the type signature (but not the implementation) of one function. In fact, if I’d let the type be inferred (or written a much more general function) I wouldn’t have had to do even that.
There’s an impression that the types in Haskell require you to do a lot of extra work when refactoring, but in practice the changes you make aren’t materially more numerous or different from the ones you’d make in an untyped language; it’s just that the compiler will tell you about the changes you need to make, so you don’t need to find them through unit tests or program crashes.
countDropped should be changed. To what will depend on your specification, but as a simple inspection, countDropped ["", "", "", ""] [Nothing, Nothing, Nothing, Nothing] will return -4, which isn’t likely to be what you want.
That’s correct in a manner of speaking, since we’re essentially computing the number of characters in all of the substrings minus the length of the printed items. Since [""] = [[]] but is printed as " ", we print one extra character (the space) compared to the total length of the string, so a negative “dropped” value is sensible (with your all-empty input, that’s 0 characters minus 4 printed items, hence -4).
Of course the entire thing was a completely contrived example I came up with while I was sitting at work trying to get through my morning coffee, and really only served to show “sometimes we don’t need to change the types at all”, so I’m not terribly worried about the semantics of the specification. You’re welcome to propose any other more sensible alternative you’d like.
That’s correct in a manner of speaking, since …
This is an impressive contortion, on par with corporate legalese, but your post-hoc justification is undermined by the fact that you didn’t know this was the behavior of your function until I pointed it out.
Of course the entire thing was a completely contrived example …
On this, we can agree. You created a function whose definition would still typecheck after the change, without addressing the changed behavior, nor refuting that in the general case, Maybe T is not a supertype of T.
You’re welcome to propose any other more sensible alternative you’d like.
Alternative to what, Maybe? The hour-long talk linked here is pretty good. Nullable types are more advantageous, too, like C#’s int?. The point is that if you have a function and call it as f(0) when the function requires its first argument, but later the requirement is “relaxed”, all the places where you wrote f(0) will still work and behave in exactly the same way.
Getting back to the original question, which was (1) “what is it that Rich Hickey doesn’t understand about types?” and (2) “does changing the return type from Maybe T to T cause calling code to break?”: the answer to (2) is yes. The answer to (1), given (2), is nothing.
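To make (2) concrete, here’s a minimal sketch (all names hypothetical): a caller that pattern matches on the Maybe stops compiling once the producer “strengthens” its return type.

portV1 :: String -> Maybe Int
portV1 s = if null s then Nothing else Just (length s)

callerV1 :: String -> Int
callerV1 s = case portV1 s of
  Just p  -> p
  Nothing -> 8080

-- If portV1 is later changed to the "stronger" String -> Int, the Just/Nothing
-- patterns above no longer typecheck, so every call site that inspected the
-- Maybe has to be edited, even though the promise only got stronger.
portV2 :: String -> Int
portV2 = length

main :: IO ()
main = print (callerV1 "") >> print (callerV1 "config")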
I was actually perfectly aware of the behavior, and I didn’t care because it was just a small toy example. I was just trying to show some examples of when and how you need to change code and/or type signatures, not write some amazing production quality code to drop some things from a list. No idea why you’re trying to be such an ass about it.
She did not address question (1) at all. You are reading her response to question (2) as implying something about (1), and that makes your response needlessly adversarial.
This is a great example. To further reinforce your point, I feel like the one place Haskell really shows its strength is in these refactors. It’s often a pain to figure out what the correct types for parts of your program should be, but once you know that and make a change, the Haskell compiler becomes a real guiding light when working through a refactor.
He explicitly makes the point that “strengthening a promise”, that is, going from “I might give you a T” to “I’ll definitely give you a T”, shouldn’t necessarily be a breaking change, but it is in the absence of union types.
Half baked thought here that I’m just airing to ask for an opinion on:
Say, as an alternative, the producer produces Either (forall a. a) T instead of Maybe T, and the consumer consumes Either x T. Then the producer’s author changes it to make a stronger promise by changing it to produce Either Void T instead.
I think this does what I would want? This change hasn’t broken the consumer, because x would match either alternative. The producer has strengthened the promise it makes because now it promises not to produce a Left constructor.
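A minimal sketch of that idea (hypothetical names, and using () in place of forall a. a for the first producer so the sketch compiles without ImpredicativeTypes): the consumer stays polymorphic in the Left type, so the producer switching to Void doesn’t break it.

import Data.Void (Void)

-- The consumer is polymorphic in the Left payload, so it can only really use the Right side.
consume :: Either x Int -> Int
consume (Right n) = n
consume (Left _)  = 0

-- Producer, version 1: the Left payload is a placeholder type.
produceV1 :: Either () Int
produceV1 = Right 42

-- Producer, version 2: the stronger promise. A Left can never be constructed, since Void has no values.
produceV2 :: Either Void Int
produceV2 = Right 42

main :: IO ()
main = print (consume produceV1) >> print (consume produceV2)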
When the problem is “I can’t change my mind after I had insufficient forethought”, requiring additional forethought is not a solution.
So we’d need a way to automatically rewrite Maybe t to Either (forall a. a) t everywhere - after the fact. ;)
Likewise, I wonder what he thinks about Rust’s type system to ensure temporal safety without a GC. Is safe, no-GC operation in general or for performance-critical modules desirable for Clojure practitioners? Would they like a compile to native option that integrates that safe, optimized code with the rest of their app? And if not affine types, what’s his solution that doesn’t involve runtime checks that degrade performance?
I’d argue that GC is a perfectly fine solution in the vast majority of cases. The overhead from advanced GC systems like the one on the JVM is becoming incredibly small. So, the scenarios where you can’t afford GC are niche, in my opinion. If you are in such a situation, then types do seem like a reasonable way to approach the problem.
I have worked professionally in Clojure but I have never had to make a performance critical application with it. The high performance code I have written has been in C and CUDA. I have been learning Rust in my spare time.
I argue that Clojure and Rust both have thread-safe memory abstractions, but Clojure’s solution has more (theoretical) overhead. This is because while Rust uses ownership and affine types, Clojure uses immutable data structures.
In particular, get/insert/remove for a Rust HashMap is O(1) amortized, while Clojure’s corresponding hash-map’s complexity is O(log_32(n)) for those operations.
I haven’t made careful benchmarks to see how this scaling difference plays out in the real world, however.
Having used Clojure’s various “thread safe memory abstractions”, I would say that the overhead is actual, not theoretical.
Disclaimer: I <3 types a lot, Purescript is lovely and whatnot
I dunno, I kinda disagree about this. Even in the research languages, people are opting for nominal ADTs. TypeScript is the exception, not the rule.
His wants in this space almost require “everything is a dictionary/hashmap”, and I don’t think the research in type theory is tackling his complaints (the whole “place-oriented programming” stuff and the positional-argument difficulties ring extremely true). M…aybe row types, but row types are not easy to use compared to the transparent and simple TypeScript model, in my opinion.
Row types help to solve issues generated in the ADT universe, but you still have the nominal typing problem, which is his other thing.
His last keynote was very aggressive, and I think people wrote it off because it felt almost ignorant, but I think this keynote is extremely on point once he gets beyond the Maybe railing in the intro.
“When pursuing a vertical scaling strategy, you will eventually run up against limits. You won’t be able to add more memory, add more disk, add more “something.” When that day comes, you’ll need to find a way to scale your application horizontally to address the problem.”
I should note that the limit, last I checked, was SGI UV’s for data-intensive apps (eg SAP Hana): 64 sockets with multicore Xeons, 64TB RAM, and 500ns max latency for all-to-all communication. I swore one had 256 sockets, but maybe I’m misremembering. Most NUMA’s also have high-availability features (eg “RAS”). So, if it’s one application per server (i.e. just the DB), many businesses might never run into a limit on these things. The main limit I saw studying NUMA machines was price: scaling up costs a fortune compared to scaling out. One can get stuff in the low-to-mid five digits now that previously cost six to seven. Future-proofing scale-up by starting with SGI-style servers has to be more expensive, though, than starting with scale-out, even if the scale-out setup starts on a beefy machine.
You really should modify the article to bring up pricing. The high price of NUMA machines was literally the reason for inventing Beowulf clusters, which pushed a lot of the “spread it out on many machines” philosophy toward the mainstream. The early companies selling them always showed the price of, e.g., a 64-256 core machine from Sun/SGI/Cray vs a cluster of 2-4 core boxes. The first was the price of a mini-mansion (or a castle, if clustering NUMA’s), with the second ranging from a new car to a middle-class house. HA clustering goes back further with VMS, NonStop, and mainframe stuff. I’m not sure if cost pushed horizontal scaling for fault-tolerance to get away from them or if folks were just building on popular ecosystems. Probably a mix, but I have no data.
“The number of replicas, also known as “the replication factor,” allows us to survive the loss of some members of the system (usually referred to as a “cluster”). “
I’ll add that each replica could experience the same failure, esp if we’re talking attacks. That happened to me in a triple modular redundancy setup with a single faulty component. On top of replication, I push hardware/software diversity as much as one’s resources allow. CPU’s built on different tools/nodes. Different mobo’s and UPS’s. Maybe optical connections if worried about electrical stuff. Different OS’s. Different libraries if they perform an identical function. Different compilers. And so on. The thing that’s the same on each node is the one app you want to work. Even that might be several implementations written by different people, with the cluster running a mix of them. The one thing that has to be shared is the protocol for starting it all up, syncing the state, and recovering from problems. Critical layers like that should get the strongest verification the team can afford, with SQLite and FoundationDB being the exemplars in that area.
Then, it’s really replicated in a fault-isolating way. It’s also got a lot of extra failure modes one has to test for. Good news is several companies and/or volunteers can chip in each working on one of the 3+ hardware/software systems. Split the cost up. :)
Those are planned subjects for future posts. I wanted to keep this one fairly simplistic for folks who aren’t experts in the area.
I don’t think we need to have threads about every minor bugfix release of Pony (or indeed, any software).
I think we have a voting system for a reason. If people didn’t care, this would have dropped off the front page, but since this is currently the #2 story, clearly some people, myself included, enjoy this type of content. If you don’t want to see a story, then hide it.
You can apply “hide it if you don’t like it” to any story; it’s a bit of a discussion-stopper. One of the values in Lobsters is that it’s got interesting stories with a fairly good signal/noise ratio, and IMHO this isn’t very good signal/noise considering it’s a pretty minor release.
It has a pretty interesting “coming up” section.
I like these posts and would like to see them.
I appreciate your position though and wonder if better filtering (e.g. “pony + release”) would help.
The end game being a special release tag for each esoteric language and personal project people post so often that others regularly call it out?
People here work on really interesting things, major updates to those are often interesting and elicit good comments. Pony is interesting, and I actually think the Pony people are hurting their cause when instead of an on-topic discussion the responses to their posts end up being “please stop spamming this” or “can we have a special filter for this?”
I’m not “Pony people”? Like, far from it? I’m just very interested in everything proglang.
But the discussion about granularity comes up again and again, and I decided to take your example and see if we can make the platform better. I’m confused by your strong reaction, so I’ll let that rest here.
And no, the end game would not be that everything gets a tag, the end game would be that everything that has notable coverage here can get a tag here, so that people can filter.
As a (primary) developer of Monte, another language in the same niche as Pony, I get two boons out of this.
The first boon is the one that I reap when I actually click the link. I get to learn what, if any, interesting things have happened in the Pony world. Today, nothing interesting, but that’s okay.
The other boon is the contrast with Pony that I get simply by not posting regular updates about Monte and giving it a version number. This contrast helps widen the niche for capability-aware programming languages, and gives nuance and possibility to the ways in which both Monte and Pony could be used.
I do wonder about the degree to which this posting is a commercial advertisement, but I don’t feel that it is especially advertising.
Today I learned about Monte. Nice. I’m glad you posted it.