So many language-specific package managers. Still looking for a good one that works across languages. Is this not possible?
Nix with the tooling in nixpkgs fills this need for me, but requires a certain amount of buy-in.
Perhaps there is a better way.
This is possible if you have a build system that works for all languages; Buck and Bazel are the best candidates for such a build system, IMO.
I guess by now it’s useless to complain about how confusing it is that OCaml has two (three?) “standard” package managers; the ecosystem around the language is kind of infamous for having at least two of everything. I trust the community will eventually settle on the one that works the best. At least it looks like esy is compatible with opam libraries (though the reverse is not true), so it might have a good chance against opam.
Also this is kind of unrelated, but I’m really salty about ReasonML recommending JS’s camelCase over OCaml’s snake_case. This is one of the few rifts in the ecosystem that can’t really be fixed with time, and now every library that wants to play well with both OCaml and Reason/BS ecosystems will have to export an interface in snake_case and one in camelCase.
I second the choice to use JS’s camelCase for ReasonML as a salty/trigger point. It seems like a minor syntactic thing to make it more familiar for JS developers making the switch, but as someone who primarily writes Haskell for my day job, camelCase is just less readable, IMO. Something that constantly irritates me is that I even have to think about casing acronyms consistently, which is avoided by snake_case or spinal-case; i.e. runAWSCommand or runAwsCommand, setHTMLElement vs setHtmlElement, versus run_aws_command, set_html_element, etc.
The strangest thing for me is the “hey, there are two mostly compatible syntaxes for this language we call ReasonML” but it’s mostly the same thing as BuckleScript, whose compiler we use anyway, except this, and this, and … oh and by the way, it’s all OCaml inside. What?!
“Oh and also the docs for all these things (which you need) are all in completely different places and formats”
I think the ReasonML team wanted to match the conventions of JavaScript, where camel case is the norm.
I can see the annoyance though… and I have to wonder, is ReasonML syntax much better than OCaml’s? Was it really worth the break?
It’s not “better.” Yes, there are some cases where they’ve patched up some syntactic oddities in OCaml, but it’s mostly just change for the sake of being near JS.
Is it worth it? Depends. ReasonML and its team believe that OCaml failed to catch on because of syntax. If you agree, then yes, it’s worth it. And based on the meteoric rise I’ve seen of ReasonML, they may be right. That said, I believe, and I’m in good company, that OCaml didn’t catch on because it had two of everything, had really wonky package managers (and again, two of them), and still lacks a good multithreading story. In that case, no, the syntax is just change for no reason, and the only reason ReasonML is successful is because Facebook is involved.
I’m all for functional alternatives displacing JavaScript, but my main frustration with ReasonML is that any niceties you gain from using it are outweighed by the fact that it’s just one more layer on top of an already complex, crufty, and idiosyncratic dev environment. I think that’s what’s holding OCaml back as much as anything else.
Some people seem to think that OCaml’s syntax is really ugly (I quite like it) and unreadable. I’m guessing they’re the same people who complain about lisps having too many parentheses.
ReasonML does fix a few pain points with OCaml’s syntax, mostly related to semicolons (here, here, here), and supports JSX, but it also introduces some confusion with function call, variant constructor and tuple syntax (here, here, here), so it’s not really a net win IMO.
I think ReasonML was more of a rebranding effort than a solution to actual problems, and honestly it’s not even that bad if you disregard the casing. Dune picks up ReasonML files completely transparently so you can have a project with some files in ReasonML syntax and the rest in OCaml syntax. The only net negative part is the casing.
Esy and bsb are build orchestration tools, not package managers.
Esy is not OCaml-specific, it can e.g. include C++ projects as build dependencies. This is how Revery ( https://github.com/revery-ui/revery ) is being developed, for example. Esy also solves the problem of having to set up switches and pins for every project, with commensurate redundant rebuilds of everything. Instead, it maintains a warm build cache across all your projects.
Bsb specifically supports BuckleScript and lets it use npm packages. It effectively opens up the npm ecosystem to BuckleScript developers, something other OCaml tools don’t do (at least not yet).
Having ‘two of everything’ is usually a sign of a growing community, so it’s something I’m personally happy to see.
Re: casing, sure it’s a little annoying but if C/C++ developers can survive mixed-case codebases, hey, so can we.
As someone who uses JavaScript await [1], I like this syntax. I would rather do:
fetch('http://alfa.com/bravo.json').await.json().await;
versus what I currently do:
await (await fetch('http://alfa.com/bravo.json')).json();
[1] https://github.com/cup/umber/blob/master/tv/assets/main.js
Pipe operator fixes this also:
const someJson =
'http://alfa.com/bravo.json'
|> fetch
|> await
|> (x => x.json())
|> await
Why not do the usual, i.e.,
const bravo = await fetch(http://alfa.com/bravo.json');
bravo.json();
?
There’s no ambiguity, no extra parentheses, and no keywords masquerading as member properties.
Have you tried it? That’s not valid syntax inside an async function. Both methods return a promise, not an object.
If you want to debate the specifics of the syntax, we can do that. But my main point is why not stick to the usual const x = await f()
style that we normally use? Everything doesn’t need to get turned into an expression. It’s OK to break things up into multiple bindings.
If you want to debate the specifics of the syntax, we can do that.
Why would I debate someone who hasnt even tested the code they are publicly posting?
But my main point is why not stick to the usual const x = await f() style that we normally use? Everything doesn’t need to get turned into an expression. It’s OK to break things up into multiple bindings.
My actual code is this:
let chan = await (await fetch('/umber/tv/assets/data.json')).json();
a single line of 68 characters. This line presents no hallmarks of needing to be broken up:
so the only real reason to do as you suggest would be to hide the bad language grammar. Id rather not hide it, Id rather put the bad grammar on full display while at the same time praising Rust good grammar, as I did in my original comment. If you wish to hide away bad language decisions thats your choice.
Why would I debate someone who hasnt even tested the code they are publicly posting?
Because this is a discussion forum, not a PR up for review. I’m sure you can find plenty of those on GitHub.
This line presents no hallmarks of needing to be broken up
Well, there is one hallmark that people use to break up async code: to try to make it look more sequential. That’s the whole point of async/await syntax. If you want an expression syntax you can just use .then(...):
fetch('/umber/tv/assets/data.json').then(data => data.json()).then(chan => ...);
This is not that much longer than the await-expression sugared version, doesn’t require changes to the language, doesn’t require learning any new syntax, and as a bonus it even reads left-to-right in terms of the data flow. But it also doesn’t, like your version, do anything to ease understanding of the code in more sequential terms. Which is what every other async-await syntax out there is trying to do. Not ‘hide bad grammar’. But to re-arrange async or effectful code to look more linear so our linear brains can understand it with less effort.
The fact that you would propose a .then solution, coupled with your invalid first example that you never corrected, shows that maybe you dont have a strong grasp on asynchronous programming.
Async/Await was created in large part precisely to avoid constructs like .then. So having chose to use await for this exact reason, why would I now move back to .then?
Oh, I see we’re pulling out the ‘You don’t understand XYZ’ already? In that case, perhaps you don’t have a strong grasp of how a logical argument works? I didn’t propose a .then solution, I said that the expression-level await syntax doesn’t offer a significant benefit over the existing .then solution. If all you’re doing is hiding a few lambdas, what’s the point? Imho it’s simply not a high enough improvement over the incumbent.
And speaking of grasping asynchronous programming, let me ask you one more thing, what are the semantics of the expression you originally posted?
fetch('http://alfa.com/bravo.json').await.json().await;
the expression-level await syntax doesn’t offer a significant benefit over the existing .then solution
…and youre flatly wrong about that. It offers the ability to write async code in a synchronous style. That is to say, it allows you to avoid the nesting required with typical async constructs like .then.
The fact that you have failed to demonstate understanding of this bedrock benefit of Async/Await is why I made my previous comment. A comment which apparently still stands.
it allows you to avoid the nesting required with typical async constructs like .then.
What nesting? The typical use of .then is:
foo()
.then(bar)
.then(baz)
The fact that you have failed to demonstate [sic] understanding of this bedrock benefit of Async/Await is why I made my previous comment.
Let’s face it, you keep putting me down because you think I don’t understand this like you do, and you get a power trip from hammering people for (what you perceive as) weakness.
You still haven’t answered my question though, what are the actual semantics of the expression-style await syntax sample you posted?
you think I don’t understand this
You dont. If you did you would understand that in an example like this:
fetch('http://nghttp2.org/httpbin/json')
.then(q => q.json())
.then(z => console.log(z.slideshow.author));
The resultant value z doesnt and cant exist outside of the final .then context. That means with large scripts, you have to put the entire rest of the script inside the final .then statement.
If you cant see why that can be problematic and/or undesirable, then I cant help you. With await the result can exist outside of the await call as long as within async function:
let z = await (await fetch('http://nghttp2.org/httpbin/json')).json();
console.log(z.slideshow.author);
what are the actual semantics of the expression-style await syntax sample you posted?
Why would I bother going down this path with you when you dont even understand the underlying syntax of async programming with JavaScript?
That means with large scripts, you have to put the entire rest of the script inside the final .then statement. … With await the result can exist outside of the await call as long as within async function:
Yes, because ‘within the async function’ is the entire rest of the script that you alluded to earlier. That’s how the syntax desugars. If your entire argument is that .then isn’t ergonomic for larger functions, then we’re already on the same page; my very first reply to you was about using await bindings in a normal style rather than the mish-mash await-expression style that you have here.
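Roughly, and this is only a sketch, not the exact spec-level desugaring (the function names and the logging are made up for illustration):
async function loadChan() {
  // sequential-looking version
  const data = await fetch('/umber/tv/assets/data.json');
  const chan = await data.json();
  console.log(chan.length);
}
// ...behaves roughly like this .then chain, where the "rest of the function"
// ends up inside the callbacks:
function loadChanDesugared() {
  return fetch('/umber/tv/assets/data.json')
    .then(data => data.json())
    .then(chan => {
      console.log(chan.length);
    });
}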
Why would I bother going down this path with you when you dont even understand
You’re still assuming I don’t understand syntax after saying things like ‘the final .then statement’? SMH.
mish-mash await-expression style that you have here.
It is a mish-mash because of poor language design.
That is the foundation of my argument, one which you have just agreed to, thanks!
Rust did not make the same mistake, so kudos to them.
The language design may be poor, but that usage of it is not idiomatic. In practice you will see the binding syntax which I have been talking about since the beginning of this thread.
I respect Rust’s decision given their context and constraints. In this thread I was talking about JavaScript syntax because that is what you brought up and that is what I replied to.
I never said that it is idiomatic, and I don’t care that it’s not. The syntax I gave is what should be used for the most basic of examples. Binding is only needed when:
otherwise youre just creating a variable for no reason other than “that’s what I’ve always done”. Sometimes it helps to think critically about what you’re doing instead of just blindly following idioms.
And sometimes it helps to follow idioms which have been set up (usually for good reasons) instead of trying to mix in a new style and confusing people who might look at your code in the future.
It helps to think about why you want to avoid ‘creating a variable’ when it’s already the idiom in the language you’re using. Do you not want a name describing the value? Why not? Do you think it will allocate more memory? It won’t, it will allocate the same.
I made my reasons very clear. I am purposely using the syntax because:
and there it is folks, he finally ran out of on topic arguments. Shame, I was hoping to go for a few more days at least. When youve lost the rational argument, attack the opponent!
Dude, anyone can see that (a) I’m replying to something that’s not in your current comment, and (b) you edited your comment, presumably deleting the part that I did reply to. If Lobsters preserved comment history this would be really hilarious.
Thats true, because its easier for you to comment on off topic matters, than to realize the obviousness of the truth: on topic, you dont have a leg to stand on, and you never did.
I didn’t make this personal. I didn’t repeatedly accuse the other person of not understanding what they’re talking about, I didn’t edit my comments to try to hide what the other person replied to, and I didn’t turn what should essentially be a small difference of opinion in how to express some async code, into a flamefest. So, enjoy your imaginary internet victory! I’m all done here.
No, but what you did do was start a technical discussion with a profound lack of understanding of the topic at hand. Let us review your initial code:
const bravo = await fetch(http://alfa.com/bravo.json');
bravo.json();
Youve got missing open quote, and a final statement thats NOOP as you havent attached it to a variable or a method with agency. On top of that youve demonstrated a lack of knowledge of async control flow as the final statement returns a Promise, not JSON data.
Its hard to take someone seriously when they enter the conversation with ignorance and lack of effort.
Its hard to take someone seriously
No, it’s actually easy to have a technical discussion (on an internet forum, as I pointed out earlier) based on the actual merits of the argument (which you actually did do later on in the thread, no matter how begrudgingly), not on the syntactic nitty-gritty. What you keep doing here–putting down the other person with ‘It’s hard to take you seriously’, ‘You clearly don’t understand’, is the actual ad-hominem attack. Because the alternative would be to simply refute the actual technical arguments (which was plenty clear from the context), not focus on trivial details (again, in a discussion forum, not actual production code) to try to score some easy points.
Edit: also, it’s funny to read someone attacking my ‘ignorance and lack of effort’, and particularly a ‘missing open quote’, with a reply lacking actual possessive quote characters (‘youve’).
As much as you might want me to, I’m not separating your code from your argument. I might do if you have provided a single coherent code block with arguments as secondary, but you have not done that.
A technical discussion of this nature starts with code. If you cant or wont demonstate your competency then I dont really care what your position is. Respect has to be earned, and your initial example is poor and you refused to correct it. If you had provided a working example with a different style i might have looked at it and gone “hey, thats not my style but it does work, maybe the guy has a point”.
You never did that and your reticence lends to that you cant. At this point I wouldnt bother though as the window you had to demonstrate your compentence closed about 5 comments ago.
Oh my god I just realized what you are, you are asking for me to draw 7 perpendicular lines LOL:
https://www.youtube.com/watch?v=BKorP55Aqvg
Essentially you are proposing to have a discussion of which you apparently have no technical knowledge, but then get upset when an idea in your head clashes with reality, because of said lack. Sad.
The style that I referred to is widely used throughout ES6 codebases with async programming, as you yourself admitted by calling your own example un-idiomatic. No one reading this comment thread can be in any doubt as to what I’m talking about, if they take a couple of minutes to look up ‘JavaScript await syntax’. So I have no doubt whatsoever that you knew from the beginning what I was talking about. Your refusal to separate the code from the argument, your insistence on immediately assuming that I don’t know what I’m talking about based on a single code sample, speaks more about your inflexibility than about my expertise.
A technical discussion of this nature starts with code.
A technical discussion can start with technical points, it can start with code, or it can start with a mix of both. One side of the discussion doesn’t get to dictate the terms of the discussion–that’s not how forum discussions work.
But, since you worship code at the expense of all else, I will happily express the code that you originally pointed to ( https://github.com/cup/umber/blob/02dbee8c38d2cee729e1b5c2a85f70ae3e6b57f0/tv/assets/main.js ) in the more idiomatic style I referred to:
async function main() {
const data = await fetch('/umber/tv/assets/data.json');
const chan = await data.json();
chan.forEach(ab => {
...
Note that I used ellipses to indicate the rest of the function despite knowing your preference for working code above all else. Hopefully that doesn’t set off a new wave of criticisms about my lack of knowledge!
Sorry bro, but as I said your window to demonstrate competency is closed. If you had only posted this 13 comments ago it might have been worth reading!
Take care.
Oh, of course, because there’s a time limit on demonstrating ‘competency’ in internet discussions, after which any demonstrations are null and void. I must have forgotten that when they were handing out the Internet Discussion Rules! Cheers.
Its just simple rules of etiquette. Dont waste peoples time.
Instead of acknowledging your mistake and correcting it so that discussion could move forward, you refused and tried to move past it without providing a proper example as a starting point.
I dont like my time wasted, so I wasted yours. Maybe this can be a lesson to you in the future of how to handle technical discussions. Thanks.
People may not like having their time wasted, but they also generally don’t go about calling others incompetent in discussion forums because of it. That’s also etiquette.
Just curious, why not:
let chan = await fetch('/umber/tv/assets/data.json').then(r => r.json());
Not that it matters much, I’m just wondering if you had considered that option and preferred the double await for some reason.
The whole point of await is so that you can stop using .then, or at least have a more readable alternative. Proper .then syntax would be:
fetch(something).then(q => q.json()).then(THE ENTIRE REST OF YOUR CODE);
await allows you to undo some of the required nesting used with .then. I suppose you could combine them if you wanted to, but do you really want to be making that kind of frankenstein code?
https://developer.mozilla.org/Web/API/WindowOrWorkerGlobalScope/fetch
If you haven’t already read “What Color is Your Function?”, I highly recommend it: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
I wonder why so many languages opted for async/await rather than threads. I understand that granting untrusted code the power to create threads is a risk, so at least in JavaScript’s case it makes some sense. But I find it curious that languages like Go are the exception, not the norm. (My own language also uses threads.)
Rust has threads. The standard library API is right here.
Threads as they currently exist in Rust are just a wrapper on top of POSIX or Win32 threads, which are themselves implemented in the OS kernel. This means spawning a thread is a syscall, parking a thread is a syscall, jumping from one thread to another is a syscall*, and did I mention that the call stack in Rust can’t be resized, so every thread has a couple of pages of overhead? This isn’t a deal breaker if you wrap them in a thread pool library like rayon, but it means you can’t just use OS threads as a replacement for Erlang processes or Goroutines.
Green threads introduce overhead when you want to call C code. For one thing, if you make a blocking call in C land, green threads mean you’re blocking a whole scheduler, not just one little task. For another thing, that stack size thing bites you again, since it means your green threads all have to have enough stack space to run normal C code, or, alternatively, you switch stack every time you make an FFI call. Rust used to have green threads, but the FFI overhead convinced them to drop it.
So, since green threads aren’t happening, and you can’t spawn enough OS threads to use them as the main task abstraction in a C10K server, Rust went with the annoying leaky zero-overhead abstraction. I don’t really like it, but between the three options, async/await seems like the least bad.
* Win32 user mode threads can allow you to avoid task switching overhead, but the rest of the downsides, especially the stack size one, are still problems.
Great comment! Just want to nitpick about this:
Green threads introduce overhead when you want to call C code. For one thing, if you make a blocking call in C land, green threads mean you’re blocking a whole scheduler, not just one little task.
Regarding “blocking calls in C land”, using async/await with an event loop is not better than green threads: both will be blocked until C land yields control.
I wonder why so many languages opted for async/await rather than threads
I think you have to understand that this isn’t an either-or question. Rust, of course, has had threads for ages – the async/await RFC revolves around providing an ergonomic means of interacting with Futures, which are of course just another abstraction built on top of the basic thread API.
The better question would be, “What are the ergonomic issues and engineering trade-offs involved in designing threading APIs, and why might abstractions like Futures, and an async/await API, be more appealing for some sorts of use-cases?”
I’m much more of a fan of algebraic effects for this stuff. Multicore OCaml seems to be moving in the right direction here, in a way that can reduce the splitting of the language in two. I would have loved to have seen something like this in Rust, but I can understand that the more pragmatic choice is async/await + futures. We still need to figure out how to make algebraic effects zero cost.
Yeah. The problem is the language needs some sort of programmable sequencing operators built in that async primitives can make use of, while users can write code that is agnostic to them.
One example is how you can write:
map : ('a ~> 'b) ->> 'a list ~> 'b list
which is sugar for:
map : ('a -[e]-> 'b) ->> 'a list -[e]-> 'b list
where e is a type-level set (a row) of effects. That way you can have a map function that works in pure situations, or for any other combination of effects. Super handy.
There’s certainly value in the greenthread solution, as evidenced by the success of Go, but Rust’s approach makes much more control over the execution context possible, and therefore higher performance is possible. To achieve the absolute highest performance you have to minimize synchronization overhead, which means you need to distinguish between synchronous and asynchronous code. “What color is your function” provides an important observation, but we shouldn’t read it as “async functions are fundamentally worse”. It’s a trade-off.
Of course, prior to Rust, async functions didn’t give much (if any) control over the execution context, and so the advantages of async functions over greenthreads were less clear or present.
I’m not 100% sure this intuition is right, but I kinda think that in Go’s case it’s more like “every line is/can be async-await”, because of the “green threads” a.k.a. goroutines model (goroutines are not really OS threads: they’re multiplexed onto them, as will happen with async/await functions, IIUC).
I think this is a good thing.
Without this feature, websites will implement the same functionality in JavaScript, which will be less performant and harder to block.
We need a name for this pattern around network protocols: “Embrace, Capture, Break away, Lock-in”
Google did this with Google Talk vs XMPP, email (try running your own mailserver), AMP, RSS…
Email is still mostly unmolested if you understand the security and spam context; it’s not that google made it impossible to run your own smtp server, but in order to do so and not get flagged as spam, there are a lot of hoops to jump through. IMHO this is a net benefit, you still have small email providers competing against gmail, but much less spam.
Email is mostly unmolested because it’s decentralized and federated, and a huge amount of communication crosses between the major players in the space. If Google decided they wanted to take their ball and go home, they would be cutting of all of Gmail, Yahoo mail, all corporate mail servers, and many other small domains.
If we want to make other protocols behave similarly, we need to make sure that federation isn’t just an option, but a feature that’s seamless and actively used, and we need a diverse ecosystem around the protocols.
To foster a diverse ecosystem, we need protocols that are simple and easy to implement, so that anyone can sit down for a week in front of a computer and produce a compatible version of the protocol from first-enough principles, and build a cooperating tool, to diffuse the power of big players.
So how do you not get flagged for spam? I want to join you. I run my own e-mail server and have documented the spam issue here:
https://penguindreams.org/blog/how-google-and-microsoft-made-email-unreliable/
The only way to combat Google and Microsoft’s spam filters is to send my e-mail and then text my friend to say, “Hey I sent you an e-mail. Make sure it’s not in your spam folder.” Usually if they reply, my e-mail will now get through .. usually. Sometimes it gets dropped again.
I have DKIM, DMARC and SPF all set up correctly. Fuck Gmail and fuck outlook and fuck all the god damn spammers that are making it more difficult for e-mail to just fucking work.
Forgive the basic question: do you have an rDNS entry set for your IP address so a forward-confirmed reverse DNS test passes? I don’t see that mentioned by you in your blog post, though it is mentioned in a quote not specifically referring to your system.
It’s not clear who your hosting provider (ISP) is, though the question you asked them about subnet-level blocking is one you could answer yourself via third-party blacklist provider (SpamCop, Spamhaus, or many others of varying quality) and as a consequence work with them on demonstrable (empirical) sender reputation issues.
Yes I’ve been asked that before and haven’t updated the blog post in a while. I do have reverse DNS records for the single IPv4 and 2 IPv6 addresses attached to the mail server. I didn’t originally, although I don’t think it’s made that big a difference.
I’ve also moved to Vultr, which blocks port 25 by default and requires customers explicitly request to get it unblocked; so hopefully that will avoid the noisy subnet problem so often seen on places like my previous host, Linode.
I think a big factor is mail volume. Google and Microsoft seem to trust servers that produce large volumes of HAM and I know people at MailChimp that tell me how they gradually spin up newer IP blocks by slowly adding traffic to them. My volume is very small. My mastodon instance and confluence install occasionally send out notifications, but for the most part my output volume is pretty small.
Email is inherently hard, especially spam filtering; Google and Microsoft just happen to be the largest email providers, so it appears to be a Google or Microsoft problem, but I don’t think it is.
E-mail was once the pillar of the Internet as a truly distributed, standards-based and non-centralized means of communicating with people across the planet.
I think you’re looking through rose-tinted glasses a bit. Back in the day email was also commonly used to send out spam from hijacked computers, which is why many ISPs now block outgoing port 25, and many email servers disallow emails from residential IPs. Clearly that was suboptimal, too.
Distributed and non-centralized systems are an exercise in trade-offs; you can’t just accept anything from anyone, because the assholes will abuse it.
Cheap hosting is very hard to run a mailserver from because the IP you get is almost certainly tainted.
Having valid rDNS, SPF & DMARC records helps.
It’s also not really a Google issue; many non-Google servers are similarly strict these days, for good reasons. It’s just that Google/Gmail is now the largest provider so people blame them for not accepting their badly configured email server and/or widely invalid emails.
I’ve worked a lot with email in the last few years, and I genuinely and deeply believe that at least half of the people working on email software should be legally forbidden from ever programming anything related to email whatsoever.
In other words, Google didn’t have to break email because email has been fundamentally broken since before they launched GMail.
Worse, newer protocols like Matrix and the various relatives of ActivityPub and OStatus don’t fix this problem.
Matrix, ActivityPub and OStatus don’t fix Email? Well it’s almost as if they are trying to solve other problems than internet mail.
You completely and utterly missed the point.
Mastodon, Synapse, and GNU Social all implement a mixture of blacklists, CAPTCHAs, and heuristics to lock out spambots and shitposters. The more popular they get, the more complex their anti-spam measures will have to get. Even though they’re not identical to internet mail (obviously), they still have the same problem with spambots.
Those problems are at least partly self-inflicted. There’s nothing about ActivityPub which requires you to rehost all the public content that shows up. You can host your own local public content, and you can send it to other instances so that their users can see it.
Rehosting publicly gives spammers a very good way to see and measure their reach. They can tell exactly when they’ve been blocked and switch servers. Plus all the legal issues with hosting banned content, etc.
You’re acting as if that ONE problem (abusive use) is THE only problem and the rule and guide with which we should judge protocols.
While a perfectly reasonable technocratic worldview, I think things like usability are also important :)
In general, you’re right. A well-designed system needs to balance a lot of trade-offs. If we were having a different conversation, I’d be talking about usability, or performance, or having a well-chosen set of features that interact with each other well.
But this subthread is about email, and abusive use is the problem that either causes or exacerbates almost every other problem in email. The reason why deploying an email server is such a pain is anti-spam gatekeeping. The reason why email gets delayed and silently swallowed is anti-spam filtering. The reason why email systems are so complicated is that they have to be able to detect spam. Anti-backscatter measures are the reason why email servers are required to synchronously validate the existence of a mailbox for all incoming mail, and this means the sending SMTP server needs to hold open a connection to the recipient while it sifts through its database. The reason ISPs and routers block port 25 by default is an attempt to reduce spam. More than half of all SMTP traffic is spam.
If having lots of little servers is your goal, and you don’t want your new federated protocol to end up under the control of a small number of giant servers, then you do need to solve this problem. Replicate email’s federation method, and you get email’s emergent federation behavior.
XMPP has a lot of legitimate issues. Try setting up an XMPP video chat between a Linux and a macOS client. I’d rather lose my left arm than try doing that again.
Desktop Jingle clients never really matured because it wasn’t a popular enough feature to get attention.
These days I expect everyone just uses https://meet.jit.si because it works even with non-XMPP users and no client
I just got Jitsi working w/ docker-compose at meet.dougandkathy.com – not headache-free, but no way I could build it myself.
Audio, video and file transfer are still very unreliable on most IM platforms. Every time I want to make an audio or video call with someone, we have to try multiple applications/services and use the first one that works.
Microsoft Teams does this pretty well, across many platforms. Linux support is (obviously, I guess) still a bit hacky, but apparently is possible to get to work as well.
Yes, this is one of the design flaws of the C language. The complexity of makefiles isn’t essential complexity.
I don’t really like the idea that every language should have its own build system though. I would like a single package manager that can handle all languages.
Why “overkill”? A small Bazel file for a C project is very readable (perhaps more readable than the equivalent Make even).
Additionally, you do not need to rethink your build-system if the project scales up.
Oh, it’s definitely readable. But the fixed overhead of using Bazel doesn’t seem worth it for small projects.
Do you mean the overhead of launching? installing?
As a big fan of Bazel, I am genuinely curious why it is not more popular. It feels like the problem of build-system design has largely been solved.
Launching; it chews up a lot of RAM. I’m a big fan of Bazel’s language too but the implementation feels a bit heavy to me. Fine for a large project but overkill for something small.
You might find Pants interesting. It is a Blaze clone that is currently being rewritten in Rust. https://github.com/pantsbuild/pants
Oh, cool! So it uses the same language for the BUILD files & can be a drop-in replacement for Bazel?
It uses the same language (Starlark), but sadly it is not a drop-in replacement. There are small differences such as sources instead of srcs.
Maybe a shim library could be made?
Probably it’s due to the default -Xms, -Xmx and similar keys used to start a JVM instance. As an experiment, it’s possible to change these parameters. However, these problems will not go away any time soon; the JVM is still not a good choice for command-line utilities, not sure about GraalVM. Bazel has to use a daemon because of the slow startup time.
Last time I tried to use it for a small project, I found it too opinionated: dependencies should be copied into your repository, and it should be a “monorepo”. Almost no one except Google and Facebook does it that way. Or maybe that was Buck, not Bazel; I tried both at that time and may confuse them. I had to retreat to using CMake, which I hated, but it worked (I had to delete the build directory a few times per day).
But now there are even experimental build rules for CMake integration; it has always had http_archive for fetching external dependencies, but genrule still does not support a directory as output. Going to try it again some day.
Dependencies can be Git submodules (Buck, Bazel) or whole Git repos (Bazel) or HTTP archives (Bazel) or fetched via a package manager (various tools).
Buck and Bazel do not require a monorepo. In fact, they give more flexibility in module layouts than CMake due to the extra level of indirection (“cells” for Buck, “workspaces” for Bazel).
Buck supports directory output for genrules.
Bazel genrules can have multiple output files, which is almost the same thing.
Author here. Feel free to ask any questions.
carnix did not work for my personal projects and, after failing to fix it, I created crate2nix.
For me, it is also a personal goal for this year to publish software and/or blog articles. Despite, or perhaps because of, being an experienced professional software developer, this has been difficult for me: I always think that I should be able to do better. So, publishing this in an early but useful state is already a medium-sized triumph for me :)
Nice work!
The README states that crate2nix only works on Linux. But after replacing nixos-unstable in the shell.nix of crate2nix and replacing nixos in the generated default.nix file, it seems to work fine on macOS.
It doesn’t seem to work correctly in a directory that is a Cargo workspace.
Keep up the good work!
Thanks a lot!
Platform support: nice to know! For the generated files, it still filters all dependencies restricted to a target config down to Linux. If you don’t have Linux-specific targets, that’s fine, of course. I did this because some obscure Fuchsia and Windows libs broke my build.
True: I should mention it in the restrictions. It should also be easy to do better.
Feel very free to file bugs for dependencies that break the tool.
Any thoughts on using https://github.com/google/cargo-raze and Bazel inside Nix?
I haven’t tried cargo-raze but I looked at the code for inspiration. I decided to use tera because of that.
I use sandboxed builds. If bazel uses its own caches, I don’t know how that should work. Do you have an idea?
For me, it is important, and it’s what I like about Nix, that I can just run nix build or nixops deploy and I do not have to install any other dependencies.
Oh, I forgot to mention that the other way around looks quite fancy:
https://www.tweag.io/posts/2018-03-15-bazel-nix.html
Using nix to install some binaries/dependencies from blaze.
That said, I am not sure if there is a proper ecosystem around blaze? E.g. deployment tools integration or similar?
I think it would be a sandbox in a sandbox, which should work fine. The Bazel build would benefit users on Windows (without MinGW)
Bazel has a nixpkg IIRC, so it shouldn’t be too difficult to get started.
Bazel sandbox would also give you a few additional features:
These can accelerate the build process.
I haven’t tried Rust + Bazel + Nix, so was interested if you had!
The sandboxing problem that I had in mind was no network connectivity (except the built-in Nix fetchers). That would block the features that you mentioned, correct? Also, how would you access even a local cache and update it?
I haven’t really set it up for myself, but remote builders look easy to set up. Not sure how hard it is to generalize that to a bigger cluster.
Remote caching should also be possible, e.g. with cachix.
Maybe bazel does it better.
Seems like the biggest (and only legitimate?) complaint is the lack of back-ends, but this can be fixed over time. Would the author come around if more were added?
I dislike this. If you want to transform an array, use map and Promise.all. If you want to perform a side-effect, there is the for await extension.
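A minimal sketch of what I mean (the urls array and the JSON handling are just for illustration, assuming a browser-style fetch):
const urls = ['/a.json', '/b.json']; // hypothetical endpoints

async function demo() {
  // transforming an array: map + Promise.all
  const results = await Promise.all(urls.map(url => fetch(url).then(r => r.json())));
  console.log(results.length);

  // performing side-effects in order: for await ... of over an iterable of promises
  for await (const response of urls.map(url => fetch(url))) {
    console.log(response.status);
  }
}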
Java was excluded from consideration because of the requirement of deploying the JVM and associated libraries along with any program to their production servers.
Haha, what a load of crap, just admit you don’t like java. We should be honest about these things.
Please don’t dismiss people by telling them you know their own opinions better than they do or that they’re lying. There’s much better ways to disagree with their decision here.
It’s a case study for promoting Rust…I wouldn’t judge it the same as dismissing somebody’s own blog post or whatever, say.
Sometimes people’s experiences are not representative of reality, and sometimes people lie about their experiences, and sometimes people rewrite their experiences without realizing it, and sometimes people have experiences so limited that their ability to usefully generalize and advise others is called into question.
Just because it’s somebody’s claimed experience doesn’t magically make it somehow unable to be questioned.
And so we’re on the same page…we’re not talking about somebody relating a background of trauma or something: we’re talking about a company (npm) engineering team deciding to redo the same task three times using 1 hour (Node), 2 days (Go), and a week (Rust) and then the RESF using that to shill Rust (glossing over a doubling in the complexity of npm’s language ecosystem and stack).
If they’d provided real performance numbers, maybe shown how the Rust version vastly outperformed the Node version, then this might be a better article. As it is, anybody with experience reading press releases (rightly) will wonder if the “experiences” are worth learning from.
What exact claim do you find insufficiently backed up? They literally just documented the (their!) process of rewriting a chunk of JS in a different language, and how it went (for them). To get back to the original “page”: the “claim” ac was attacking was an opinion, a non-falsifiable statement at worst (as I think the conclusion is).
There are no numerical metrics or benchmarks in the article. The service in question was “projected to become a CPU-bound task”, but it’s unclear if that’s a real issue without seeing their math. Other quotes like “the legacy JavaScript implementation of this service in Node.js was in need of a rewrite in any case” hint that this might be a strictly unnecessary rewrite.
Have you not experienced engineers selling a rewrite or new technology because they’re bored and not because it’s necessary?
“Rust Case Study:” “Copyright © 2019 The Rust Project Developers” “https://www.rust-lang.org”
It’s on there specifically to convince people to adopt Rust. Which is fine: they’re promoting their work and how others benefit from it on their project page. (shrugs) It doesn’t bother me as much as friendlysock since I got at least two things out of it:
(a) JavaScript developers were able to adopt it. That’s good to know in the long term since I’m for minimizing JavaScript use in general. Rust apps will also be more secure and maybe more predictable. The typical predictions so far were that it would win over only C and C++ types, not JavaScript and Go. This is a data point against that expectation for JavaScript in an unlikely place.
(b) The Rust apps caused fewer problems, possibly due to what I just said. Then again, it could be anything since we have no data. Might not even be Rust. Heck, the Go rewrite might have taught them some lesson they reused when making the Rust version. We don’t know.
Now, far as a scientific evaluation, there’s nothing more to go on as friendlysock says. They didn’t even give us numbers. We’d also be interested in what was easy, difficult, and so on at a language level for the work they were doing. What they told us was what everyone writes, including Rust project’s marketing pages. It objectively introduces no new information on a technical level that we can evaluate or build on.
We’d also be interested in what was easy, difficult, and so on at a language level for the work they were doing.
They did mention that the Rust implementation was more difficult, though.
“The rewrite of the authorization service using Node.js took about an hour. … The Go rewrite took two days. … The rewrite of the service in Rust did take longer than both the JavaScript version and the Go version: about a week to get up to speed in the language and implement the program. Rust felt like a more difficult programming language to grapple with.”
Almost all things on Rust mention a learning curve with borrow-checker (or language), cargo being great, and a great community. Mentioning that downside is echoing common claims more than anything extra critical at this point. It’s part of the refrain.
Your mentioning this did bring up another data point I overlooked: they felt up to speed in Rust in about a week. So, JavaScript developers both found the language useful and picked it up in a week. Or at least did so for that one type of service, well enough to falsely think they understood it in general. See how we’re still lacking objective data? We have a nice anecdote, though, which justifies trying to train a crowd of JS developers on Rust on similar or different projects to see what happens.
See how we’re still lacking objective data?
That’s not a surprise to me, since I believe that the programming language field is at least a century away from having the capability of gaining objective data on anything non-trivial. We’re like surgeons ridiculing the guy who washes his hands before operating.
I guess that guy is Rust.
Nobody is selling you anything
Companies don’t just publish information randomly. A white paper is a standard marketing device (https://en.wikipedia.org/wiki/White_paper#In_business-to-business_marketing).
In this case they’re selling:
Exactly, we use whitepapers in precisely this fashion. They are basically executive tl;drs and as such gloss over details.
Perhaps the ‘haha’ didn’t convey my tone right. I meant it as a jab/joke, the type of thing you would say to make fun of your friend…
‘cmmmoooon, my friend… just admit it, you don’t like java … nenornenornenor … lets go play tag’
Those are valid complaints. Deploying a Java service requires deploying at minimum two things that must be kept in sync: a JVM and a Java JAR.
Because they are separate they bring a number of deployment and operations issues for the life of the service. Rust being a single binary removes those issues entirely.
There are resolutions for all those in Java too but they all require more effort than Rust does.
Of all the other things you have to do to run or build a service, that seems like a drop in the bucket of things to care about to me.
How many backwards-incompatible changes is the JVM making these days anyway?
Quite a few, actually.
I am no JVM expert but in my experience as a Clojure application developer it’s quite common for new Java releases to break Clojure and/or widely-used Clojure libraries. It’s happened several times in the last year. At work, we are currently on an older Java release for this reason. Right now, the latest Java has a serious performance regression for some fairly-common Clojure use cases and folks are being advised not to upgrade.
It’s not so much the JVM’s backward incompatibilities as the JARs that depend on the latest and greatest JVM, except that one old JAR that only works on some ancient version.
So you end up running multiple JVMs across various machines to make various JARs happy. Suddenly you start daydreaming of Rust- or Go-built binaries…
But generally I agree, JVM nightmares aren’t that miserable compared to 5,000 external service dependencies you may have to operate just to make X work.
Shouldn’t that be easily automated, either provision the server with a pre-installed JVM (using ansible? etc.) or use Docker and put the JVM in there as is common?
Sure, but that’s no reason to go through the same pain a second time. I put up with many things in the Python packaging world because I already know and use Python, but I am not willing to invest time in learning an ecosystem that exhibits issues anywhere close to that.
But deploying any service is already tracking versions of databases, the state of your app, the code + compatibility with the OS. I don’t think avoiding java solves dependency or state management.
While I think you phrased this strongly, I’m also puzzled by the quote. The legacy application being replaced is a node app. Running a node app requires a VM and standard library just as running a JVM app does. What is the overhead that makes a JVM app unacceptable while a node app is ok? Perhaps it’s language/tooling familiarity? In my experience, all those three-letter J** acronyms can get fairly confusing.
As an aside, I found the following quote refreshingly honest -
“I wouldn’t trust myself to write a C++ HTTP application and expose it to the web”
Maybe you haven’t read the full article? It literally says:
Given the criteria that the programming language chosen should be:
● Memory safe
● Compile to a standalone and easily deployable binary
● Consistently outperform JavaScript
of which Java doesn’t fulfill at least one.
@ac’s comment doesn’t indicate that he missed that part of the article. He is challenging the premise of the second bullet point. The statement about Java that he quoted was the implicit justification for that bullet point.
I don’t agree, I think the bullet point was the justification for the statement about Java and not the other way around. At least logically this makes more sense. You do have a point though about the order in the document:)
The community edition. Enterprise edition:
Twitter runs on the community edition for all production services and has impressive numbers. Search for talks by Chris Thalinger on YouTube to learn more.
Yep, it’s still not “it’s free”, but “it has a free variant”. Oracle is no stranger to changing licensing, as they have recently shown with the mainline Oracle JVM.
Not wanting to deal with oracle stuff at all is a reasonable stance.
Fair enough, though in this same vein you could argue that C++ is a safe programming language, if you avoid the “thorns”;)
It’s a real issue. I stumbled across this “warp” project that makes standalone app bundles for the Buck build system and Buckaroo package manager, which are written in Java.
https://github.com/dgiagio/warp
If it weren’t an issue, then they wouldn’t have done this weird dance. I also took care to make a Python app bundle for Oil for related reasons.
We still use Warp, even though Buckaroo is now AOT-compiled F#. The reason is that we have dynamic libraries but still want users to be able to wget the tool for quick installs.
It’s a legitimate concern. Having the ability to deploy a single binary easily from CI or the like can greatly reduce ops headaches (Python is a bit of a mess to deploy because it’s hard to do this).
You could, as a prototype, upload a bunch of c++ source code in npm packages right now and nobody would stop you. (You might stop yourself.)
Where I was going with that is that, if you wanted to make a prototype right now, it’s a possibility.
There is a project which does roughly as you say. react-native libraries distribute obj-C + java code with some conventions for how to build them, inside npm packages.
This article is full of factual errors. It confuses source files with translation units, seems to think that there are always a foo.h/foo.{c,cpp} duality (the implementation of a header could be spread amongst several source files), and keeps talking about linking to .cpp files when one links object files instead.
unlike most languages, C++ splits code into headers and translation-units
No it doesn’t. It splits code into headers and implementation files. Translation units are (roughly) the result of running the preprocessor on a source file.
Headers are not necessarily evil.
Yes, they are. The only reason they exist is because when C was invented computers didn’t have enough RAM to compile a whole program.
We can query the compiler for actual header-usage.
That’s what ninja does. It’s not new.
I don’t use C++ much these days, but if I did I’d be wary of using a package manager that doesn’t understand how C++ is built or what translation units are.
It confuses source files with translation units
They could be clearer, yes.
seems to think that there are always a foo.h/foo.{c,cpp} duality
The article did not read that way to me.
keeps talking about linking to .cpp files when one links object files instead.
To me it was implied that you must create object files first.
That’s what ninja does. It’s not new.
It doesn’t claim to be new, but IIRC Ninja does not use -M in the way described in the article. The article suggests using -M to verify that only explicitly depended upon header-files are included, resulting in fewer undefined reference errors.
They could be clearer, yes.
Not clearer; they could be correct.
To me it was implied that you must create object files first.
How? This is a quote: “Undefined references occur when you depend on a header, but not on the corresponding translation-unit(s).”
It doesn’t claim to be new, but IIRC Ninja does not use -M in the way described in the article. The article suggests using -M to verify that only explicitly depended upon header-files are included, resulting in fewer undefined reference errors.
Including a header doesn’t result in undefined reference errors unless:
https://buckaroo.pm/posts/a-response-to-accio-dependency-manager A Response to “Accio Dependency Manager”
You have to drink a lot of typed Kool-Aid to consider this an acceptable way to program:
string msg = sqrt(-1)
.leftMap([](auto msg) {
return "error occurred: " + msg;
})
.rightMap([](auto result) {
return "sqrt(x) = " + to_string(result);
})
.join();
It looks much nicer with the Coroutine TS. Hopefully something similar (but better designed) will land in C++ proper.
In the meantime, you can access the values in an unsafe manner using .isLeft(), .getLeft(), etc.
I love functional programming and this is abhorrent. Although, to its credit, C++ has never been big on esthetics.
I agree. I’m still in the process of re-learning the various C++ standards, and this code is practically unreadable to me. I would much prefer something I can read and mentally step through.
Even in languages with more concise syntax that’s a braindead way of programming if you ask me.
string msg = sqrt(-1).leftMap(msg => 'error occurred: #{msg}')
.rightMap(result => 'sqrt(x) = #{result}')
.join();
In C++ it’s abhorrent. Not to mention that leftMap and rightMap should be left_map and right_map in C++. Using camelCase in C++ is like using lowercase_with_underscores in Java. It’s just wrong.
In the context of error-handling, an Either allows you to do Either<ExceptionT, ResultT>, whereas std::optional only allows you to do std::optional<ResultT>. This means that std::optional does not allow you to know why something failed.
If there is only one way in which something can fail (an example might be a map look-up) then std::optional is sufficient.
Whew, that new format is repetitive:
targets = [ "//:satori" ]
[[dependency]]
package = "github.com/buckaroo-pm/google-googletest"
version = "branch=master"
private = true
[[dependency]]
package = "github.com/buckaroo-pm/libuv"
version = "branch=v1.x"
[[dependency]]
package = "github.com/buckaroo-pm/madler-zlib"
version = "branch=master"
[[dependency]]
package = "github.com/buckaroo-pm/nodejs-http-parser"
version = "branch=master"
[[dependency]]
package = "github.com/loopperfect/neither"
version = "branch=master"
[[dependency]]
package = "github.com/loopperfect/r3"
version = "branch=master"
How about a simple .ini?
name = satori
[deps]
libuv/libuv = 1.11.0
google/gtest = 1.8.0
nodejs/http-parser = 2.7.1
madler/zlib = 1.2.11
loopperfect/neither = 0.4.0
loopperfect/r3r = 2.0.0
[deps.private]
buckaroo-pm/google-googletest = 1.8.0
TOML can be written densely too, e.g. (taken from Amethyst’s Cargo.toml):
[dependencies]
nalgebra = { version = "0.17", features = ["serde-serialize", "mint"] }
approx = "0.3"
amethyst_error = { path = "../amethyst_error", version = "0.1.0" }
fnv = "1"
hibitset = { version = "0.5.2", features = ["parallel"] }
log = "0.4.6"
rayon = "1.0.2"
serde = { version = "1", features = ["derive"] }
shred = { version = "0.7" }
specs = { version = "0.14", features = ["common"] }
specs-hierarchy = { version = "0.3" }
shrev = "1.0"
TOML certainly is repetitive. YAML, since it hasn’t come up yet, includes standardized comments, hierarchy, arrays, and hashes.
---
# Config example
name: satori
dependencies:
libuv/libuv: 1.11.0
google/gtest: 1.8.0
nodejs/http-parser: 2.7.1
madler/zlib: 1.2.11
loopperfect/neither: 0.4.0
loopperfect/r3: 2.0.0
More standards! xkcd 792. I’m all for people using whatever structured format they like. The trouble is in the edges and in the attacks. CSV parsers are often implemented incorrectly and explode on complex quoting situations (the CSV parser in Ruby is broken). And XML & JSON parsers are popular vectors for attacks. TOML isn’t new of course, but it does seem to be lesser used. I wish it luck in its ongoing trial by fire.
More attributes are to come. For example, groups:
[[dependency]]
package = "github.com/buckaroo-pm/google-googletest"
version = "branch=master"
private = true
groups = [ "dev" ]
Makes sense, I don’t see an obvious way to encode that in the ini without repeating the names of deps in different sections.
I don’t buy most of his points.
It is still safer to be in a plane than driving to the airport.
From what I have seen, people are still capable of optimising software for performance when they need to, they just rarely do.
Software is complex. It is the most complex thing civilization has built so you can expect them to break the most often.
Some do. We only need so many of them the same way we only need so many pediatricians in a hospital.
If they had added features instead the software would be failing more often which is what you were complaining about 5 minutes ago…
I’m only halfway through, but I’ll summarise the situation: the things he considers good to optimise are just not so important at the current moment. Social issues, climate change, movement towards authoritarianism, AI, etc. are more likely to be a collapse event than webdevs not caring that their webshit takes up 5% CPU rather than 1%.
One way to account for climate change is to stop building ever more powerful computers. Make chips cheaper and less polluting to manufacture, concentrate on low energy consumption, perhaps even if that makes desktop computers 10 times slower than they are right now. In parallel, make software 100 times faster, like it used to be a couple of decades ago (we used to think that wasn’t that fast, because computers were much slower). Cut unneeded features such as fancy GUIs if you have to. It’s only a minor change, but by simplifying everything that way, we’ll make our civilisation a bit more resilient than it currently is.
As for what’s important to optimise… we each have our own skills right now. I’m not sure a web dev can optimise those more pressing issues right now. But they can make the web site faster and less power hungry.
I’ll believe that when I see most people on the Internet talking big about green computing suffering through the use of small computers like Pi’s instead of their nice desktops, laptops, tablets, etc. Plus, exclusively recycling older hardware whenever it’s available. Most refuse to do those things citing some real or perceived benefits that they want to meet which demand harming the environment. Just like the people and businesses they complain about. They just ignore the environment to optimize different goals and metrics.
Personally, I’m on a recycled Core i7. Needed i7 for current and future fixes for CPU vulnerabilities. Maybe also verification tools that run through GB’s of state. I kept my last laptop, a lightly-used Core Duo (or 2) that Dell made for Linux, for 7-8 years. I also build various appliances out of scrap PC’s. I don’t do much on power usage since I think that problem should be solved at supply, folks in my area won’t do it (“full steam ahead on fake warming!”), and so it wouldn’t make a difference by the numbers. That’s how I’m making my tradeoffs. It’s consistent with the pragmatic environmentalism I preach.
While I’m at it, I encourage more people to make their next computer a recycled or like-new one, or whatever. Hardware got so fast that today’s stuff runs decently even on an 8-year-old laptop. I’m sure whatever is 7 years to 1 day old will be anywhere from OK to great. ;)
Well, if software wasn’t so damn slow, people wouldn’t suffer on the Pi. And I don’t think buying a Pi will make the problem solve itself. It has to happen en masse.
That said, my laptop is 3 years old, my desktop is over 10 years old, and my palmtop (Nexus 5) is about 6 years old. I think I’m not doing too badly.
Sounds like you’re doing pretty good. :)
Far as en masse, that’s the reason I call these discussions virtue signaling or at least of no actual value. Stopping the problem would instead require campaigns to create mass change that adapted continually to their audiences, lots of product development to deliver better stuff to apathetic folks, and political campaigns pushing people and/or regulations that forced the matter. This would happen across the world.
I think human nature will defeat itself on this one. So, I plan for climate change instead of trying to stop it. I try to reduce energy use and waste for other reasons. China’s new policy on waste just reinforced the importance of decreasing it.
So your argument is that to stop a collapse event we need to start what looks like the beginning of a collapse event (computers are slower, energy consumption is reduced, etc.)? In that case we might as well put the pedal to the metal and wait for a ‘natural’ collapse instead.
When my car approaches an obstacle, I prefer to apply the brakes rather than wait for the ‘natural’ outcome.
In both cases, you are stopped. So the obstacle has served its purpose.
Extending this analogy further breaks down the analogy.
Rate of change, in both cases, is the difference. One is comfortable; the other, lethal.
Growth != progress. Growth for the sake of growth is a tumor, and there’s nothing wrong with stopping growth that exists only for the sake of growth.
If you think humans have intrinsic value, how can more humans be bad? Unless you think there is an inflection point whereby every single human born is actually better dead than alive?
No, I don’t believe humans have intrinsic value. I don’t believe anything has intrinsic value. Humans give value. That said, I was talking about economic growth, not the quantity of humans.
So it looks like a collapse is inevitable. I was proposing we accompany it. Another way to accompany the collapse would be to stop planned obsolescence. That alone should cause a noticeable recession, though if done correctly should not worsen our lives in practice (well, except the likes of Apple). We could also slow down the collapse, by building more (fission) nuclear plants. They’re damn expensive, but they will last longer than oil. 100% renewable energy is obviously the future, but that’s likely also a future with less, probably much less energy than what we currently have.
More powerful (and power hungry) chips may on balance save energy because we can solve optimization problems in logistics, manufacturing, etc.
Possibly. But this would only apply to a small fraction of all chips: those used in factories, transport companies, or anywhere else that could save energy with more computation.
I see your point, but just wanted to point out that those things aren’t separate. Bloated, opaque software has implications for climate change and authoritarianism.