This just seems like twitter flamebait.
Maybe there’s a seed of a future good article in there? I’d be interested to see an in-depth article on the pros and cons of “regular” vs “pair programming” from someone who has worked extensively in both, or a data-driven analysis of how long PRs take to be approved in repos.
It seems like an unnecessary attack on the asyncness of PRs to promote collaborative coding – a false dilemma.
It assumes:
Which simply isn’t true. Instead:
(And in my opinion: In this day and age of Slack – we need more asyncness, not less)
The weirdness of posting to Twitter with an image of text…
Edit: yet again I yearn for a flag reason like “no-content” or “useless” or similar… this is technically on-topic but so shallow as to be useless as a basis for discussion.
Or just disallow Twitter as a submission source…
Want to call an async function blockingly from a sync function? -> Throw it on an executor!
This is colored, no? If it wasn’t, you would just call the async function without needing an executor. Can’t you do something like this in every language that has futures-based async?
The key part is blockingly.
You can fire-and-forget async functions in any language with “colors”, but you can’t block synchronously until they finish.
In JS there’s no way to get the result of a Promise without unwinding your stack first (microtasks are processed only after the entire call stack unwinds, so no return, no results).
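A minimal sketch of that point: even an already-resolved Promise never yields its value synchronously, because its `.then` callback is queued as a microtask that only runs after the current call stack has fully unwound.

```javascript
// Even a resolved Promise can't be read synchronously -- the .then
// callback is a microtask and runs only after the stack unwinds.
function tryToGetResultSynchronously() {
  let result = null;
  Promise.resolve(42).then(value => { result = value; });
  return result; // still null: the callback hasn't run yet
}

console.log(tryToGetResultSynchronously()); // prints: null
```

This is exactly why a sync function cannot “blockingly” call an async one in JS.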
In Rust you can block a thread waiting for anything async that you want. That’s not even a feature of async executors, but merely the fact that you’re in full control of OS threads, so you can always build on top of that.
No, it’s typed. Rust, like C# and others, avoids the color mess by just unifying it with the type system. It has special syntax, but under the hood it’s all the same. It’s simpler (no separate color checker needed) and more powerful (e.g. sync manipulation of async functions).
Just because Rust and C# hardcoded `async`/`await` into the language doesn’t mean their functions aren’t colored anymore.
Wait, what? It’s much easier than Rust:
// "async" color
async function a() {
return "hello";
}
// not "async" color, but calls an async-color function
function b() {
a().then(greeting => console.log(`${greeting} world`));
}
You can also easily return values calculated in sync functions that depend on data from async functions, because an async function is just syntax sugar for returning a Promise, and so your non-async function can just return a Promise as well:
// "async" color
async function a() {
return 1;
}
// not "async" color
function b() {
return a().then(x => x + 1);
}
// not "async" color
function c() {
b().then(val => console.log(val));
}
c();
// prints 2
Or, since `await` is just syntax sugar for waiting for a Promise, you could even mix and match, e.g. `c` could be rewritten as:
async function c() {
const val = await b();
console.log(val);
}
Even though `b` was never explicitly tagged with the `async` color marker.
If you’re using promises (via then, etc) the function is async, not sync, no matter which syntax you’re using…
Well, it really depends what you mean by “async”, and I think precision matters, as this topic is trickier in JS than some give it credit for. This might be pedantic on my part, but just to be super clear here, a function “is async” in JS when it uses the `async` keyword – that’s it. The only thing the `async` keyword does is enable `await` and `try/catch/finally` as syntactic sugar for `.then`/`.catch`/`.finally`. As a result, when an `async` function is desugared, it decomposes into an outer function which chains together a number of callbacks. This is what makes the async function feel async; any one statement within the async function body could belong to a number of different callbacks after desugaring.
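As a rough sketch of that desugaring (approximate – the actual transform engines perform is more involved), the statements after an `await` end up inside a `.then` callback:

```javascript
// An async function...
async function sugared() {
  const x = await Promise.resolve(1);
  return x + 1;
}

// ...decomposes into roughly this: the code after the `await`
// becomes a callback chained onto the awaited Promise.
function desugared() {
  return Promise.resolve(1).then(x => {
    return x + 1;
  });
}

sugared().then(v => console.log(v));   // prints: 2
desugared().then(v => console.log(v)); // prints: 2
```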
If you’re using promises (via then, etc) the function is async, not sync, no matter which syntax you’re using…
I don’t think “the function” is precise enough to be meaningful because it could mean two things:
Getting a Promise in JS and neither returning it nor handling its rejections is dangerous, as that will swallow any exceptions thrown within that Promise chain.
It also won’t execute the content of the Promise chain synchronously, so any code depending on that execution won’t be able to wait for it to be done.
This is a real issue in JS that’s been discussed in the context of e.g. CLI tools, where sync execution can be preferable from a simplicity viewpoint.
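A sketch of that swallowed-rejection hazard (run under Node; function names here are made up for illustration): a Promise that is neither returned nor `.catch`’ed turns a thrown error into an unhandled rejection the caller can never catch.

```javascript
// Dangerous: the Promise is neither returned nor .catch'ed, so the
// thrown error becomes an orphaned, unhandled rejection.
function fireAndForget() {
  Promise.resolve().then(() => {
    throw new Error("lost"); // nothing is chained onto this
  });
  // nothing returned, so callers cannot observe the failure
}

// Safe: returning the Promise lets the caller handle the failure.
function properlyChained() {
  return Promise.resolve().then(() => {
    throw new Error("caught");
  });
}

// The orphaned rejection is only observable via a process-level hook:
process.on("unhandledRejection", err => console.log("unhandled:", err.message));
fireAndForget();

properlyChained().catch(err => console.log(err.message)); // prints: caught
```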
A quick glance tells me Chrome is the browser of choice, given everything is green (to the point where I thought the author was a Google employee). But no, that doesn’t seem to be the case when I drill down into what “harmful” means.
IMO, the page isn’t very useful without a TL;DR context at the top… if I have to click on 3rd-party articles or drill down to the details, then the purpose of the landing page is questionable. My recommendation would be to clarify what “Harmful”, “Shipped” and “Community Draft” mean at the top, and then it will make much more sense. :-)
Without more context it’s difficult at a glance to know how to interpret “Harmful”.
It looks like it’s saying “Mozilla’s implementation of the Serial API is harmful” but it sounds like what it’s actually saying is “Mozilla considers the Serial API to be harmful” which is very different!
Hmm, yeah, good point. The wording is taken from their own site which is linked to when one clicks on the status.
Suggestion on how it could be improved?
Personally I have no idea what this website is about. Perhaps add a few lines on top that explain what it is?
Further down I’ve written:
observations of APIs with controversy around them and where hard facts have often been hard to find
Maybe replacing/extending the current “Background” in the top with something similar? Maybe like this:
A gathering of Web API specifications that have caused controversy among browser vendors, giving them relevant context
I think you need even more context than that. What is Web API? Why is it controversial? The nice thing about the FAQ format is that you can spend the first 1–3 items answering questions like these and anyone who already has this context can just skip over them.
A design note—the text in the FAQ expands to fill the full width of the screen (or at least the 1,280 pixels of my browser window), and there is also no margin between the text and the edge of the screen. Both of these things make the text harder to read. You might consider limiting the width of the text to 800 px or 40 em (very approximate numbers) and, on smaller screens, adding at least 10 px of whitespace on either side.
Suggestion on wording and such is much appreciated, this is just something I threw together quickly in an afternoon to try and gather references in these topics :)
Perhaps you’d consider changing the colour scheme? To me, green = GOOD and red = BAD, which makes it hard to understand what’s actually going on at first glance.
In what way? Green = positive about the state of the spec, Red = negative about the state of the spec, isn’t that the correct way?
I think the problem is that it’s not immediately obvious that this ‘judgement’ of good vs bad is about the spec. At first glance this just looks like chrome has everything green and is thus good, while firefox/safari have everything red and are thus bad.
“Harmful to Users” or “Deemed Harmful to Users” perhaps? The key point that needs communicating is that Mozilla has determined that implementing the spec would be harmful to its own users, e.g. someone might use the serial api to modify their insulin delivery device.
Well, Mozilla’s own description of their “Harmful” label is “Mozilla considers this specification to be harmful in its current state.”
That the focus is on the spec, not the implementation, should be clarified
Totally, but that’s better explained by the ones considering it to be harmful than for me to try and summarize and maybe misinterpret
By using the single word “Harmful” I’d argue that you have summarized. It’s just that that summary is ambiguous and prone to misinterpretation, as others in this thread have pointed out.
Maybe “Mozilla considers it harmful” or “No plans to implement”? I know these are more wordy than what you’ve got now, but I can’t think of a shorter bit of text that still conveys the right meaning.
Added an issue for it to ensure it doesn’t get lost: https://github.com/voxpelli/webapicontroversy.com/issues/1
They often haven’t rejected the specs though; rather they have found that the specs, in their current state, would be harmful to the web.
Remember: All of these specs are drafts and still under discussion, even though Chrome has decided to ship them
Yea, I get that, I just don’t feel that “HARMFUL” is representative of what’s going on.
For example, the Safari side of things talks about anti-fingerprinting challenges, which is fair.
I don’t get the sense that the Chrome side is actively trying to enact more ways to fingerprint, but rather they’re trying to build a browser environment that competes with OS functionality, which I think is also fair (ideology aside). I’m not sure how Safari feels about this, given that they’re a purveyor of iOS and macOS and probably don’t love the idea of browsers competing.
Meanwhile I’m not sure what Mozilla’s agenda is. They’re no longer providing Firefox OS, but also it’s not clear that Firefox is interested in pushing browser functionality forward, while also experimenting with ads/sponsored content by default.
My personal bias, as someone who uses Linux and benefits greatly from cross-platform applications like browser apps, is that I like the idea of these additional WebAPIs and it doesn’t sound intractable to make them robust against fingerprinting. The cost of not advertising the functionality by default and even requiring the user to manually enable them seems more than worth it.
My wish for something like the Web API Controversy page (which I appreciate exists as it’s a handy dashboard to keep track of!) is that it didn’t make the premise of the proposals seem nefarious and intractable. :)
I think it would be nice to link to discussions directly in the details, e.g. https://github.com/mozilla/standards-positions/issues/336
I prefer to link to the most official kind of reference and have it refer to the discussions they feel are relevant, feels like that has a better chance of being up to date and staying as objective as possible
So, I’ve been using VSCode for the last uh, couple of years probably. I’ve heard of both of those extensions, but I would in no way describe them as the best parts. The large collection of other extensions and the sheer quality of it (I originally thought of it as “like Atom, except it doesn’t crap itself every couple of days”) is much more what keeps me using it.
The [expletive] CLA is however much more concerning from my PoV, and would stop me contributing anything to it, but I’m ok with it being slightly proprietary (I’m typing this on a Mac so I’m not exactly on the ideological-purity end of things).
In my opinion, as far as CLAs go, Microsoft’s CLA is fine (I even signed it). But I am categorically against CLAs, because they are asymmetric, i.e. not inbound=outbound.
To be fair, it’s not just Apple; Mozilla mostly takes a similar approach: https://twitter.com/voxpelli/status/1286230638526435329
So, we have the browser engine vendors today: Apple with Safari/WebKit, Mozilla with Firefox/Gecko, and Google with Chrome/Blink. Isn’t it kind of weird that so many web standards are being standardized with 2/3 of vendors unwilling to implement them? What’s the process here?
They are drafts, and drafts don’t necessarily become standards. For an example at hand, the Geolocation API is a standard (ratified in 2016). Geolocation Sensor is not; it is a draft (last updated in 2018).
Aha. From the title and article, it sounded like Apple refuses to implement standard APIs. So the real story is just that Apple and Mozilla won’t let some harmful APIs get standardized.
Only it doesn’t really matter. Since Chrome is so big, whatever it does is a de-facto standard: web developers are going to use those APIs, users are going to blame other browsers for “not working”, and Chrome is going to maintain its share because of it.
I would think that Safari on iPhone has enough market share to force web developers to support it. It would surprise me if a commercial website intentionally disregarded MobileSafari support.
Market share of Mobile Safari is actually quite poor. It’s usually supported despite the market share, as iPhone users are widely regarded as valuable users (e.g., more likely to spend money online).
I think it might also depend on where your customers are–even if iOS is only around 15% of the worldwide smartphone market, it’s 58% of the US, 51% of North America, and 26% of Europe.
So the real story is that Google is using its near-monopoly power to circumvent the standards process? There’s some kind of irony here, but I just can’t tell WHAT.
WHAT was a great force when Mozilla needed to pry the Web from Microsoft. It created a standard on which Firefox and later Chrome could build better browsers than IE and win users over. But then Google got big and took the process over, so here we are.
No disagreement here! But worth pointing out that Mozilla could only make that move because Apple and Opera were backing them. I just think the important things to keep in mind about standards organizations is that they are inherently political, and that those with a seat at the table are generally large corporations who answer only to their shareholders. As such, they should be understood as turf where players jockey for competitive advantage by forming temporary strategic alliances. I think everyone paying attention to these things understands how this works, except for some programmers, who I guess are conditioned to treat even draft standards as holy writ descended directly from the inscrutable heavens, or maybe take the rhetoric about “serving users” a little too literally.
But as consolidation erodes consumer choice, there’s less of a game to play, and thus standards become less relevant.
Ignoring the dialog on privacy or surveillance capitalism:
I’m selfishly happy. The best application experiences are native and this forces more shops to make native applications.
Now, if we could just kill electron…
You’re wrong. The best application experience is with apps coded in <something> for <something>. Everyone knows that and no matter how good any other app appears to be, it isn’t.
I mean, nothing that isn’t coded in Rust, compiled for a 64-bit ARM core, utilizing an embedded MongoDB, and strictly linted will ever have any good graphical design or UX. Just plain logical.
I was with you until the sarcasm. I don’t see why we should mock and caricature others’ preferences: we’re all here because we have weird & specific preferences ;)
The intent here is to drive Chrome experiments, not to track users?
It can be abused as something else; we don’t know if that abuse is happening. The only thing we know is the intent, and that’s not to track individual users but to track browser experiments?
It doesn’t help that Google has just one privacy policy for all its products, which is so vague and full of phrases like “we may [..]” that no one can discern what exactly Google collects and what exactly it’s used for.
For example:
When you’re not signed in to a Google Account, we store the information we collect with unique identifiers tied to the browser, application, or device you’re using. This helps us do things like maintain your language preferences across browsing sessions.
Note that it says “things like [..]”, not “maintain your preferences”. What else does it do exactly, then? Very vague. Later on it goes on to say:
We collect information about your activity in our services, which we use to do things like recommend a YouTube video you might like. The activity information we collect may include:
[..]
- Activity on third-party sites and apps that use our services
So now we’ve gone from “maintain language preferences” to “activity on third-party sites” (note that it’s not clear what exactly “activity” means). Much further down it goes on to say that “We may combine the information we collect among our services and across your devices” (no mention of a Google account).
If Chrome had a clear privacy policy stating “this is exactly the information we collect, and this is exactly what we use it for” then okay, fair enough. But it doesn’t: it just has this very broad Google policy which basically says “we can do whatever we want with your data” (there are a few restrictions like sharing with 3rd parties, but not many).
Is this data used for that? Probably not. But Google’s refusal to give hard promises on this isn’t exactly inspiring a lot of trust.
This would/could be a great way to distribute internal tools to all the different platforms in use – just have a private tap and it should work, no?
Simplicity of hosting your own tap is definitely a good advantage. The fact that brew will use local git directly is nice. It makes it easy to consume private repos and leaves the authentication to the right tool, instead of inventing an opinionated new way of authenticating to private taps.
Love static sites with Webmentions 👍
I’m running an alternative to webmention.io if someone is interested: https://webmention.herokuapp.com/
It has no dependency on any external JS library; it only has its own small cacheable one that progressively enhances links to mention lists, so it works without JS as well – the comments just won’t get inlined, only linked to, and of course they won’t be updated in real time without the JS either.
So it was true – Edge will move to Chromium and the web will have yet another major browser with WebKit origins – a sad day for the web.
Now only Gecko/Servo remains as an alternative of different origin.
Things I haven’t yet understood:
Will Microsoft also use V8 rather than Chakra? And if so, will they as a consequence also drop official development on Chakra and on the Chakra-based Node.js?
There’s now one less closed source browser, I’m not sure how that’s a sad day for the web? If anything the web is more open since all major browser engines (Blink, WebKit, and Gecko) are open source projects and take outside contributions.
Plurality is losing, open implementations are gaining. The open web standards are hurt by a lack of plurality, so even if it’s a win from an implementation perspective, it’s a loss from a standards perspective – and I would say that the loss from the standards perspective outweighs the win from the implementation perspective in this case, in an open web regard.
If you wanted to write your own browser, you might try implementing various standards. However, your success depends on whether other people follow those standards as well. If there are many implementations, even proprietary, then people will make web pages that aim towards the center. If there is only one, then standards won’t matter.
To be fair though, the amount of effort required to write a useful browser from scratch in 2018 is so insanely high that even a corporate behemoth like Microsoft with $$$ oozing out of its ears can’t stomach it. Is that really a use-case worth addressing? Would we really be worse off if there was just a single open source engine that everybody used? Kinda like Linux has become the universal kernel for running native binaries in the cloud…
This problem only worsens when the corporate behemoths consolidate. What are the chances that MS pushes back on a new feature that’s too complex now that they don’t have to implement either?
The new “living standards” make this much, much harder. It is like building on quicksand: you can’t target a stable version of these standards. There’s also no sane changelog to speak of, as far as I know. The RFC standards we used to have were quite sane, but all formalisms are slowly being removed, which makes interoperability unnecessarily hard.
I feel like the Node.js on ChakraCore effort was dead-on-arrival. The Node.js/JavaScript ecosystem already has a hard enough time with native interop that trying to abstract it away was premature. It’s still possible that the ABI Stable Node API work takes off but, sitting here speculating, it doesn’t seem to have enough of a benefit to developers to warrant packages switching.
I would be less sad if Microsoft had chosen Gecko/Servo here but I’m not too sad all the same. I don’t (yet) understand what rendering engine/JavaScript VM diversity really gave web developers. I can get behind browser diversity but it seems like what’s beneath the surface doesn’t matter anymore. I’d point to iOS as an example of this—Safari vs. Chrome is a worthwhile debate but it’s all WKWebView under the hood, and because of that iOS users can all benefit from the performance/battery life and site compatibility.
What plurality amongst engines gives is an insurance that the web will be developed against actual standardized behavior rather than just the implemented version of the majority engine.
There are lots of examples of eg. optimizations that assume that all browsers work like browsers with a WebKit origin does, but such optimization may not at all help in eg. Gecko or even make it worse there.
There are 2 ways to address this: having even more browsers with substantial marketshare or having just one open source rendering engine that is used by all.
And all sites running anywhere on iOS as a consequence suffer from WebKit’s poor and generally laggard support for newer standards.
We’re back to the state of affairs before the Apple / Google collaboration on WebKit fell apart. Same number of web engines under development.
No, it’s less, right? I count WebKit (Apple/Google), Gecko (Mozilla), EdgeHTML (Microsoft) and Presto (Opera). Presto was technically switched out for WebKit before the Blink fork but really they happened at the same time - within a month or two IIRC. Close enough that Opera announced they would switch to Blink instead before almost any sort of work had been done on the switch.
Now all we’ve got is Gecko, WebKit and Blink. And it’s worse than just those numbers would imply because market share these days is more imbalanced in favor of a Blink monopoly ([citation needed]).
Same number of engines, but with fewer origins – all except Gecko now share the WebKit origin and inherit the basic architecture choices made in it.
This will make it incompatible with GPL’d projects – right? As the GPL does not allow any additional restrictions?
Reminds me of the classic JSLint license: https://en.wikipedia.org/wiki/JSLint
That license had “The Software shall be used for Good, not Evil.” in it – which caused quite a few problems.
Not just GPL; it violates the FSF’s definition of Free Software:
The freedom to run the program as you wish, for any purpose (freedom 0).
It violates the Open Source Initiative’s definition of open source:
- No Discrimination Against Persons or Groups
The license must not discriminate against any person or group of persons.
- No Discrimination Against Fields of Endeavor
The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
It violates the Debian Free Software Guidelines:
- No Discrimination Against Persons or Groups
The license must not discriminate against any person or group of persons.
- No Discrimination Against Fields of Endeavor
The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
In other words, it’s proprietary software (with source available)
Reminds me of the need for features like what I proposed in this RFC to Yarn: https://github.com/yarnpkg/rfcs/pull/76 Will try to find time soon to take up work on that one again.
Also: Here is the thread from the previous time this happened: https://lobste.rs/s/eyyiav/npm_package_is_stealing_env_variables_on
Many of the features they claim IRC doesn’t support, it does in fact support nowadays, as IRCCloud and others are cooperating on creating modern IRC specifications that make IRC more on par with the Slack experience than classic IRC is. Link: https://ircv3.net/
Unfortunately, I don’t have much faith in IRCv3 – it’s barely been implemented, and one of the IRCv3 people I talked to left and gave up on it. He’s now supporting Matrix, due to it basically being an independent JSON reimplementation of a proposed binary replacement idea for IRCv3 that resolved many of its fundamental problems, free of IRC “culture.”
Wait, what? MS Github and Gitlab have pull requests, not git.
Not all of us are comfortable with others in our space and looking over our shoulders while we work. And, well, not all of us are capable of looking over other peoples’ shoulders.
In any case, it is possible for two people to do “synchronous” pair programming over an asynchronous communication medium. I’ve done it with an emailed patch ping-ponging back and forth with another person at a speed not much slower than real-time text chat. I’ve used IRC as an out-of-band signaling mechanism for collaborating with someone on a shared branch in a shared repo. For this to really work, both people need to be comfortable with letting the other modify their work.
A buzzword artist might call it “computer-mediated iterative pair programming”. For some sorts of geek, it works well.
Git has pull requests.
That simply outputs a request that you can then copy into e.g. an email?
Indeed. A pull request, if you will.
I was going to make this point myself, but it seemed rather pedantic. You can consider the PR features of Git-based forges to be a shiny version of `git format-patch` + mailman — the point of the article about workflow patterns stands. Pair programming is also possible remotely using Etherpad-esque collaborative editors. You don’t even need voice chat; text chat is sufficient. (I’ve done it, though not in a long time.)
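As a minimal sketch of that `git format-patch` + email flow mentioned above (repository path, branch history, and commit messages here are all made up for illustration):

```shell
# Set up a throwaway repo with two commits to have something to export.
rm -rf /tmp/pr-demo && mkdir /tmp/pr-demo && cd /tmp/pr-demo
git init -q
git config user.email dev@example.com
git config user.name Dev
echo hello > file.txt
git add file.txt
git commit -qm "Initial commit"
echo world >> file.txt
git commit -qam "Add world"

# Turn the latest commit into an emailable patch file -- this is the
# raw material that a forge "pull request" dresses up with a web UI.
git format-patch -q -o patches HEAD~1
ls patches  # -> 0001-Add-world.patch
```

The generated `.patch` file is a complete email (headers, commit message, diff) that the recipient can apply with `git am`.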
The assumption of the tweet is also that such PRs need a review/approval from another developer before getting merged – which isn’t true either.
They can be merged by a bot or by the author themselves – all depending on the principles one uses in the project.
Some open PRs for every substantial change, leave them open for e.g. 48 hours or a week for people to be able to object, and otherwise merge them as long as the tests pass.