1. 4

    I’m probably way too optimistic but I see the natural outcome of this as open source hardware/software tractors :)

    1. 2

      There seem to be some such initiatives: http://opensourceecology.org/

    1. 3

      So, please forgive my ignorance but reading all the negative responses here - isn’t the fact that we now have a protocol standard for distributed social media an all around good thing?

      1. 9

        The lack of standards has never been an issue – the lack of deployments, independent implementations, momentum and actual interoperability has always been an issue.

        I remember implementing OStatus back in 2012 or so at Flattr, only to find that no client actually implemented the spec well enough to be interoperable with us, and that rather than spending time trying to fix that, people instead wanted to convert all the standards from XML to JSON, where some like Pubsubhubbub/WebSub took longer to convert than others, leaving the entire emergent ecosystem in limbo. And later ActivityStreams converted yet again, from JSON to JSON-LD, but by then I had moved on to the IndieWeb.

        I find the IndieWeb's approach – document patterns, find common solutions, standardize those common solutions as small, focused, composable standards, and reuse existing web technology as far as possible – much more appealing.

        One highlight of that is that one can compose such services in a way where one's main site can even be a static site (mine is a Jekyll site, for example) but still use interactive components like Webmention and Micropub.

        Another highlight is that, as a developer, one can focus one's time on building a really good service for one of those standards and use the rest of them from the community. That way I have, for example, provided a hosted Webmention endpoint for users during the last 4 years without having to keep up to date with every other spec outside of that space, and I'm now doing the same with a Micropub endpoint.

        Composability and building on existing web technologies also somewhat defuses the entire "let's convert from XML to JSON" trend – HTML is HTML and will stay HTML, so we can focus on building stuff, gaining momentum and critical mass, and not just converting our implementations from one standard to the next while fragmenting the entire ecosystem in the process. That also means that standards can evolve progressively, and that one can approach decentralized social networks as a layer that progressively enhances one's blog/personal site one service at a time. Maybe first Webmention receiving? Then sending? Then perhaps some Micropub, WebSub or some Microformats markup? Your choice – it doesn't all have to happen in a day, it can happen over a year, and that's just okay. That fits well into an open source scene that wants to promote plurality of participants as well as implementations, while also wanting to promote a good work/life balance.
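
        As an illustration of how small each such step is, here's a minimal sketch of the "Webmention sending" step – per the spec it's a single form-encoded POST of source and target to whatever endpoint the target page advertises. All the URLs below are made-up placeholders:

```python
# Sketch of a Webmention "send": per the spec this is just an HTTP POST
# with source and target as application/x-www-form-urlencoded fields.
# All URLs here are made-up placeholders.
from urllib.parse import urlencode
from urllib.request import Request

def build_webmention(endpoint: str, source: str, target: str) -> Request:
    """Build the request notifying `target` that `source` links to it."""
    body = urlencode({"source": source, "target": target}).encode("ascii")
    return Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

req = build_webmention(
    "https://webmention.example/endpoint",  # endpoint discovered from the target page
    "https://alice.example/reply-to-bob",
    "https://bob.example/original-post",
)
print(req.data.decode())  # source=...&target=...
```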

        1. 1

          Unfortunately every time an ActivityPub thread makes it to a news aggregator like this, it always seems like there are some negative comments in the feed from some folks from the indieweb community. It kind of bums me out… part of the goal of the Social Working Group was to try to bridge the historical divide between linked data communities and the indieweb community. While I think we had some success at that within the Social Working Group, clearly divisions remain outside it. Bummer. :(

          1. 1

            Sorry for the negativity – it would help if posts like these presented the larger context, so that people don't interpret it as "ActivityPub has won", which as you say isn't at all the case – but this thread has shown that it can certainly be interpreted that way, and the title of this submission actually implies it too.

            This gets even more important with the huge popularity of Mastodon, as that's a name many have heard and which they might think is the entirety of the work of that working group – which isn't the case, and portraying that adequately is something everyone has a responsibility for.

            So sorry for the negativity, but it’s great that we both feel that it’s important to portray the entirety of the work of that group!

        1. 1

          I’m kind of excited about PaymentRequest – would that integrate with e.g. Apple Pay? Reducing the overhead of paying sites is one of the things that I think could turn the web around. I have wished that e.g. Firefox would put a “$1” button on their toolbar, which would allow you to just give the site a dollar. Practical problems aside, it could really improve the best parts of the web.

          1. 1

            PaymentRequest does support Apple Pay and is also supported by Google, Samsung and Microsoft at least – so building a PWA with in-app purchases is very much possible now.

            As a side note, I actually built such a browser button as you mention when I was at Flattr, and investigated ways to identify the rightful owner of a page so that they could claim the promise of a donation. We never got it to fully work on all sites, but it worked for some of the larger silos, like Twitter and GitHub, and also for those who had added rel-payment links to Flattr. But we/I also investigated having it crawl people’s public identity graphs to try to find a connection between the owner of a page (through e.g. a rel-author link) and a verifiable identity – like their Twitter account or maybe some signed thing, Keybase-style. That ended up with the creation of https://github.com/voxpelli/relspider, but the crawler was never fully finished (e.g. smart recrawling was never implemented) and never put into production. I still like the idea though.

          1. 3

            Ugh. ActivityPub makes me sad – we have so many good, deployed solutions to 80%+ of the social networking stuff, and ActivityPub just ignores all prior art (including prior art by its creators) and does everything from scratch.

            1. 3

              Why in your opinion did ActivityPub “make it” while others have failed?

              Disclosure: I contributed to Rstat.us for a while.

              1. 2

                How do you mean “make it”? You mean mastodon? Because mastodon got popular before it had implemented any ActivityPub, so that’s unrelated :)

                OStatus and IndieWeb tech are still the most widely deployed outside of Mastodon (and are partially supported by Mastodon as well)

                1. 1

                  Bah, I apologize for not being clear. By “make it”, I mean, why has ActivityPub been promoted as a standard instead of OStatus or IndieWeb or another attempt at a protocol for the same space?

                  1. 3

                    OStatus mostly described a best practice for using other standards in a way that created a decentralized social network – so it never really needed standardization on its own. That, plus the people behind it moved towards next-generation standards instead, e.g. identi.ca moving to pump.io.

                    IndieWeb tech, though, is getting standardized by the very same group that published this recommendation, and e.g. Webmention and Micropub have been Recommendations even longer than this one.

                    1. 3

                      Atom, PubSubHubbub (now WebSub), and Webmention are all standards with various bodies

                      1. 1

                        PubSubHubbub

                        Seeing some of the silly things they did with regard to best practices, I can’t really say I feel bad about this. Things like using GETs instead of POSTs (if memory serves correctly) because of stupid legacy decisions.

                        1. 1

                          Yeah, Webmention has been a W3C Recommendation for quite a while now. I still don’t like how the W3C standardized two ways of doing roughly the same thing…

                  2. 2

                    I think AP is an okay standard (although it, again, underspecifies a lot), but it doesn’t make anything possible that wasn’t already possible with OStatus, or some very simple extensions to it.

                    1. 1

                      In what way did you think that ActivityPub did not learn from OStatus?

                      1. 1

                        so many good, deployed solutions to 80%+ of the social networking stuff

                        For example?

                        1. 3

                          friendica, hubzilla, gnu social, pleroma

                          1. 4

                            pleroma

                            Pleroma either currently supports or is very close to fully supporting AP, and that was a pretty important goal from the outset.

                            1. 4

                              I know, I wrote it :)

                              1. 2

                                I think I follow you then :) Thanks for writing Pleroma <3

                      1. 5

                        Are there any lightweight ActivityPub implementations that aren’t Mastodon/GNU Social/et al? Every time I try to read the standard, it feels very heavy. I hope it’s not like WebRTC :(

                        1. 10

                          Bridgy Fed is an ActivityPub implementation that translates between Webmention and ActivityPub :)

                          1. 3

                            There’s an implementation report that includes lots of tools: https://activitypub.rocks/implementation-report/

                          1. 2

                            Let’s bring some context:

                            This is a recommendation of the Social Web Working Group, a group that’s behind many specifications like this: https://www.w3.org/Social/WG#Specifications

                            There are standards from both the IndieWeb side (WebMention, Micropub, WebSub) and the ActivityStreams side.

                            The standards may overlap each other but it should be possible for sites and services to support both.

                            1. 5

                              I’m considering paying Pinboard for their web archiving feature, but so far it’s not been a huge pain point.

                              1. 6

                                I use Pinboard’s archiving, but just for articles I’ve read and other things where I’d only be mildly annoyed if I lost them – it’s a bit too unreliable for anything else. The archiving time is sporadic: some things get archived in a couple of hours, others can take weeks, and many of my bookmarks say they’re archived but trying to open the archived page just causes an error.

                                I still use it because it’s the only one I’ve found that will archive PDFs and direct links to images. Well, that, and because I paid 5 years in advance.

                                1. 1

                                  Thanks for the review. It’s sad they don’t do the archiving at the moment of bookmarking – that’s what I feel is the best approach. But maybe they have so many users that reaching the front of the queue takes a week or so?

                                  Considering that you don’t think that highly of Pinboard, I’m wondering why you went with buying 5 years of service from the beginning.

                                  1. 2

                                    I already had a standard Pinboard account, grandfathered in from when it was a one-off fee, and I had been happy enough with it when I upgraded to an archiving account. My thought process was that I’d pay in advance and then have everything archived and not have to worry about it again for 5 years; I didn’t consider that it would turn out to be less reliable than I’d like.

                                2. 2

                                  I pay for it and use it – my only regret is activating it so late, after having added bookmarks for years – by then many, many bookmarks had already vanished. (Thankfully Pinboard lists all such errors and the specific HTTP code that caused them.)

                                  1. 1

                                    I like that they provide all error and HTTP codes. Are there logs too, so you can actually tell when the page stopped being reachable?

                                    1. 2

                                      No, just the error and an option to manually trigger a retry.

                                      It’s added as a machine tag like code:403

                                  2. 2

                                    I joined Pinboard almost exactly 7 years ago and it has already saved my butt a bunch of times. According to my profile page, about 5% of my bookmarks are dead links at this point.

                                    1. 1

                                      That has to be reassuring. They’re not only providing fun statistics, they’re proving their value to you. I hadn’t heard about Pinboard until today. If there were a local client for syncing the archived content locally, I could consider buying the service and using it, but first I would need to restore my habit of bookmarking, which I somehow lost many years ago.

                                    2. 1

                                      Interesting. I guess some bookmark-like service on top of archive.is / web.archive.org could be created. Or maybe such a thing already exists for free.

                                    1. 1

                                      Some other examples of OWFa license:

                                      1. 1

                                        Great to see the work done on the OWFa 7+ years ago – enabling open, reusable specifications – be picked up and used to make things like GraphQL available for all, without risking any patent infringement claims from the people behind it.

                                        1. 2

                                          I use https://soverin.net/ – nice to have an email provider within Europe, and one that focuses on privacy and the core feature rather than a million other things.

                                          1. 5

                                            The fact that things like this can happen has long been acknowledged by npm, but not much has happened. See this post from March 2016: http://blog.npmjs.org/post/141702881055/package-install-scripts-vulnerability

                                            I did an RFC for Yarn myself a week ago to try to address these very concerns, by allowing one to opt in just the modules that actually need to run scripts and have the scripts of the rest be ignored: https://github.com/yarnpkg/rfcs/pull/76
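
                                            npm itself already ships a blunt, all-or-nothing version of this: lifecycle scripts can be disabled wholesale with the ignore-scripts setting, e.g. in an .npmrc – the point of the RFC is to make that an opt-in per module instead:

```ini
# .npmrc — skip every package's install/lifecycle scripts;
# the few packages that genuinely need a build step must then be handled manually
ignore-scripts=true
```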

                                            1. 1

                                              I like this idea of a common interface for tracing – that way libraries etc. that want to integrate with it don’t have to integrate with every tracing system separately, but can instead just implement the standard interface and leave it to each tracing system to provide libraries compliant with that.

                                              Seems really easy to get started using it with something like Jaeger, and it should be possible to write adapters for e.g. AWS X-Ray as well.
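
                                              Roughly, the shape of such a common interface (the names below are illustrative only, not the real tracing API) is an abstract tracer that libraries code against once, with each tracing system supplying its own implementation:

```python
# Sketch of the "common tracing interface" idea — illustrative names,
# not the real API: libraries instrument themselves against an abstract
# interface once, and each tracing backend ships its own implementation.
from abc import ABC, abstractmethod
from contextlib import contextmanager

class Tracer(ABC):
    @abstractmethod
    def start_span(self, name: str):
        """Return a context manager yielding a span."""

class RecordingTracer(Tracer):
    """Trivial in-memory implementation, standing in for a real backend."""
    def __init__(self):
        self.finished = []

    @contextmanager
    def start_span(self, name):
        span = {"name": name, "tags": {}}
        try:
            yield span
        finally:
            self.finished.append(span)

def fetch_user(tracer: Tracer, user_id: int):
    # A library only ever talks to the abstract interface.
    with tracer.start_span("fetch_user") as span:
        span["tags"]["user.id"] = user_id
        return {"id": user_id, "name": "Ada"}

tracer = RecordingTracer()
fetch_user(tracer, 42)
print([s["name"] for s in tracer.finished])  # ['fetch_user']
```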

                                              1. 1

                                                I’m in the process of defining how changelogs are written and maintained at my company.

                                                It’s interesting that the linked page discourages using git commit logs:

                                                Commit log diffs

                                                Using commit log diffs as changelogs is a bad idea: they’re full of noise. Things like merge commits, commits with obscure titles, documentation changes, etc.

                                                The purpose of a commit is to document a step in the evolution of the source code. Some projects clean up commits, some don’t.

                                                The purpose of a changelog entry is to document the noteworthy difference, often across multiple commits, to communicate them clearly to end users.

                                                I think I would rather keep the logs clean and readable (rebase etc.), or have each feature as a merge commit to master and generate the log from that.

                                                I’m using the same approach for work logs that are mandatory where I work. Worklogs are solely generated from commits. That encourages me to write readable, clean commit messages (no “fix typo”).

                                                I wonder what the experience of people here is?

                                                1. 3

                                                  I’m a big fan of keeping commit log and changelog separate.

                                                  To me the commit log should tell the story of how the project has evolved from a maintainer perspective – with atomic commits for every small aspect.

                                                  The changelog on the other hand should tell the story of how the project has evolved from a consumer perspective – with emphasis on new and deprecated features, bug fixes etc.

                                                  Especially when it comes to libraries and frameworks, this distinction makes the two distinctly different.

                                                  The consumer of a framework/library cares about the public API and how it works, while the maintainer cares a lot about internal APIs, tests and code quality, and cares in much more detail about dependency updates, performance tweaks etc. Even a bug fix that merits a single line in a changelog could merit multiple individual commits in the commit log, as the public-facing bug fix could be a symptom of a much larger issue in the internal code.

                                                  1. 1

                                                    Interesting approach. But you can have multiple commits in a topic branch (atomic commits) that is merged to master, and that merge commit would be the one visible change to the consumer (“Fix bug X”, “Implement feature Y”). The changelog would then be just the commits directly on master. The added benefit is that you could revert such a feature wholesale.

                                                    Thanks for your input!

                                                    1. 2

                                                      Yeah, there are ways to embed the consumer oriented changes inside the commit log and make the changelog a subset of the commit log. The “Conventional commits” standard is another such approach: https://conventionalcommits.org/

                                                      One tricky thing with the merge approach is that simple UIs like GitHub’s won’t separate the merge commits from the commits they merge in, but rather just list them all in the same list – so unless one uses a tool that shows the full tree rather than a simple commit list, it will be hard to easily separate the important commits from the non-important ones with that approach.
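
                                                      As a sketch of that approach, here’s how one might filter the changelog-worthy subset out of Conventional Commits-style subject lines (simplified – the real spec has more rules, e.g. BREAKING CHANGE footers – and the commit subjects are made up):

```python
import re

# type, optional (scope), optional ! for breaking changes, then description
PATTERN = re.compile(r"^(?P<type>\w+)(?:\((?P<scope>[^)]+)\))?(?P<bang>!)?: (?P<desc>.+)$")
CHANGELOG_TYPES = {"feat", "fix"}  # what a consumer typically cares about

def changelog_entries(commit_subjects):
    """Yield consumer-facing lines from conventional commit subjects."""
    for subject in commit_subjects:
        m = PATTERN.match(subject)
        if m and (m.group("type") in CHANGELOG_TYPES or m.group("bang")):
            yield "{}: {}".format(m.group("type"), m.group("desc"))

log = [
    "feat(api): add a new posting endpoint",
    "chore: bump dependencies",
    "fix: handle missing Content-Type header",
    "refactor!: rename internal modules",  # breaking, so changelog-worthy
]
entries = list(changelog_entries(log))
print(entries)
```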

                                                      1. 1

                                                        “Conventional commits” seems very interesting.

                                                        I agree that the merge approach would require some special tooling and can look bad on GitHub but I’m going to give it a try and see how it pans out…

                                                1. 4

                                                  What problem does this solve?

                                                  “JSON is simpler to read and write, and it’s less prone to bugs”

                                                  “For most developers, JSON is far easier to read and write than XML”

                                                  Nobody is generating or reading feeds manually, we do it with libraries. These libraries have been around for 10+ years. Users never get to read the content so aiming for readability is useless.

                                                  My code consists of object calls like feed.item[0].content and feed.author. Why would I need a JSON-formatted feed? The object calls would be exactly the same, only against another library.

                                                  Again, I honestly don’t get it. Why try to push a format whose spec is exactly the same as the previous one, only expressed in another serialization? These guys have been around for many years, so they clearly know what they’re doing.

                                                  I love json and use it everywhere but this seems like a clear case of NIH.

                                                  1. 4

                                                    Parsing a classic RSS feed is a pain though, with so many different edge cases and differing implementations that it’s almost impossible to write a good parser from scratch. Atom is a little better, but not as widespread. A simple format with good documentation that’s easy to implement correctly, for both new publishers and new readers, can be nice.

                                                    It’s not the only one of its kind though; some alternatives are e.g. https://indieweb.org/h-feed, which comes from the point of view that if one already has a list of posts in one’s HTML, why does one need to publish another list of posts? Can’t one just decorate that list so that readers can understand it, and that way get a more DRY and possibly more accurate list of posts? (Another problem with RSS feeds has been that they often get neglected and forgotten, so that metadata such as images becomes unrepresentative – e.g. far worse than the image quality that publishers are giving to Facebook and such. So building feed technology from the perspective of the new social web, and the way that Facebook, Twitter etc. consume stuff, can have advantages in data quality.)
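
                                                    For comparison, this is roughly what the “easy to implement correctly” argument looks like on the consuming side – a minimal JSON Feed reader is little more than json.loads, with no namespaces or encoding edge cases (the feed content below is made up):

```python
import json

raw = """{
  "version": "https://jsonfeed.org/version/1",
  "title": "Example Blog",
  "items": [
    {"id": "1", "url": "https://blog.example/post-1", "content_text": "Hello"},
    {"id": "2", "url": "https://blog.example/post-2", "content_text": "World"}
  ]
}"""

feed = json.loads(raw)  # no namespaces, DTDs or schema fetches involved
posts = [(item["id"], item["content_text"]) for item in feed["items"]]
print(feed["title"], posts)
```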

                                                    1. 3

                                                      Parsing a classic RSS feed is a pain though, with so many different edge cases

                                                      Yes, but the reason is not XML. Whoever did not migrate from RSS to Atom, will not migrate to JSONFeed either.

                                                      1. 2

                                                        I agree, the reason is mainly not XML, although XML contributes in some places as it allows more complex data structures than JSON

                                                        And yeah, adoption will be hard.

                                                      2. 2

                                                        Aside from h-feed, there’s the schema.org markup, which might be more widely supported (i.e. Google). HTML5 itself (possibly with ARIA) contains plenty of ways to mark up a blog.

                                                        I’d still stick to serving Atom to clients that request it though, there’s nothing wrong with the format, and it has precise semantics.

                                                      3. 3

                                                        The only point I disagree with you on is that this is exactly the same as RSS/Atom; it looks simpler and better defined to me. But other than that, I can only offer an anecdote on why an updated feed format might be a good idea:

                                                        When Mastodon was still getting hype, I thought it would be fun to publish my twitter feed on my own server and see who would subscribe to it. But I looked at the specs (RSS/Atom, PubSubHubBub) and thought, “Nah, I don’t want to mess with XML today.” Maybe I’m a bad person. Probably I am. :) But I bet I’m not the only one out there, so having a cleaner, nicer way to construct feeds might lower the barrier and get more people to publish.

                                                        1. 1

                                                          You’re definitely not a bad person, I’d dare say that the real bad guy is the one who invented XML ;)

                                                          However, why didn’t you use a library? It’s been a long time since I interacted with XML manually. Even DOM manipulation is XML manipulation (well, simplified XML), but we do it via selectors (CSS, XPath) and never by manually parsing the XML tree.

                                                        2. 3

                                                          These libraries have been around for 10+ years.

                                                          And they still suck. We still see builds fail because a third-party web site stopped hosting a schema. We still see XPaths failing to work until you preload the list of namespaces you’re using, or some other such horrible shenanigans.

                                                        1. 3

                                                          I sincerely hope that WebExtensions will gain enough capabilities to support converting add-ons like Tab Mix Plus – the fact that Firefox supports such powerful extensions is one of the things that makes it special today; it’s something no other browser supports.

                                                          Where else would one go to get a similar power-user interface if Firefox kills Tab Mix Plus and the likes? Custom Electron-based browsers? Not really feasible if one needs that very same browser for web development as well, with good development tools etc.

                                                          1. 4

                                                            A sixth Nobel prize in economic science was added in 1969.

                                                            This is not strictly true. It’s not a Nobel prize but rather the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel. So it’s not the money of Alfred Nobel that’s used to pay for that prize.

                                                            1. 3

                                                              WebGL is based on OpenGL, any reason why “WebGPU” can’t be based on Vulkan?

                                                              Also: WebGL was specified by the Khronos Group, right? As were WebCL, OpenGL, OpenCL, Vulkan etc.? And Apple is a member of the Khronos Group? Then why is this suddenly an initiative within the W3C rather than yet another Khronos Group spec?

                                                              For someone who doesn’t have a lot of insight into these APIs and specs, it sure looks like WebKit/Apple is trying to avoid a WebVulkan so that they can inject some of their Metal into it.

                                                              If so, then the end result will be four different low-level graphics APIs: Metal, Vulkan, DirectX 12 and WebVulkan. Isn’t that a few APIs more than needed? And won’t that be a portability nightmare?

                                                              1. 3

                                                                WebGL is based on OpenGL, any reason why WebGPU can’t be based on Vulkan?

                                                                Yes, there are reasons – “Thoughts about a WebGL-Next” explains why it can’t be WebVulkan.

                                                              1. 2

                                                                 Excited about this – Webmention, along with the also-in-progress Micropub standard and WebSub (the renamed Pubsubhubbub standard), enables the creation of full-featured self-hosted social media profiles in the style of Twitter and beyond.

                                                                 One can also create social media apps like Tweetbot for these profiles – apps that can read stuff in realtime (WebSub), post new posts and interactions (Micropub) and let other users know of one’s interactions with them (Webmention).

                                                                 Lastly, they can be integrated into the existing social media platforms, and thus be usable from day one, through what the IndieWeb movement calls POSSE (and PESOS) combined with services like Brid.gy. That way one dodges Metcalfe’s law and gets the full network effect of the existing networks from day one, while still being able to craft one’s own space online with technologies that work fully standalone.

                                                                 Also excited about the already fairly large number of independent implementations of all of these technologies – the specifications have truly been proven to be reimplementable again and again, without relying on some unspecified implementation detail of a single dominating library for interoperability.
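
                                                                 For a feel of how small these building blocks are, here’s a sketch of a Micropub “create a note” request – per the spec it’s a form-encoded POST of h=entry plus the post’s properties, authorized with a Bearer token (the endpoint and token below are placeholders):

```python
from urllib.parse import urlencode
from urllib.request import Request

def micropub_note(endpoint: str, token: str, content: str) -> Request:
    """Build a Micropub request creating a simple note (h=entry)."""
    body = urlencode({"h": "entry", "content": content}).encode("utf-8")
    return Request(
        endpoint,
        data=body,
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )

req = micropub_note(
    "https://example.com/micropub",  # placeholder endpoint
    "xxxx-token",                    # placeholder access token
    "Hello from my own site!",
)
print(req.data.decode())  # h=entry&content=...
```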

                                                                1. 1

                                                                  It’s nice to see all the informal R&D done by the IndieWeb folks coming together into specs. Having never messed with a Pub/Sub system before, I’m planning on implementing the WebSub spec as a Django app for some semi-dayjob-related learning.

                                                                1. 17

                                                                  An alternative to Disqus is Isso, which is self-hosted.

                                                                  1. [Comment removed by author]

                                                                    1. 4

                                                                      I’ll probably add this to my blog so you all can be mean to me for a change. Good find @hga.

                                                                      1. 7

                                                                        I use Isso on my blog and I absolutely love it.

                                                                         Shameless plug: I wrote an OpenShift cartridge(?) which makes installation of Isso just one click – link

                                                                      2. 4

                                                                         I wonder if it has any spam protection features – I can see administrative features but no auto-spam rule features.

                                                                      3. 4

                                                                         One of the great things about Disqus is that you can use it on a “static” blog. My blog (the one hosting this article) is just GitHub Pages with posts written in Markdown. This has the advantage of being simple and free (and easy to cache/distribute on a CDN etc.) but has the drawback of not being able to have custom code like that.

                                                                        When my blog was hosted on AppEngine I had self-hosted comments; but Disqus seemed like a much better option. Not so sure about that now though!

                                                                        1. 6

                                                                           I’ve gone back and forth on that, but the solution on my current blog is a note at the bottom of each post saying:

                                                                          Comments welcome: mjn@anadrome.org
                                                                          

                                                                          This outsources the infrastructure to email, which already works, with the obvious drawback that the barrier for many people to emailing someone is higher than that for posting a comment. Although that might not be purely a drawback. :-) Another difference of course is that email is private, while some comments might be interesting to other readers, too. I partly remedy that by occasionally posting (attributed) updates at the bottom of a post if someone sends in something I think might be interesting for other readers, as in this example.

                                                                          Besides not wanting to mess with running either a first- or third-party commenting system, the other motivation is that on a personal blog I feel some desire to keep it as a place for my own writing, not as a general third-party discussion forum attached to every page. So if someone sends in a relevant comment I’m happy to post it (or a paraphrase), but I don’t necessarily want comments from random people arguing about tangents to be posted underneath my essays.

                                                                          1. 6

                                                                            I’ve thought about not having comments directly (esp. when HN/Reddit/here usually get more comments than directly on the blog), but I do still think they add value. Not only do I get “Thanks!” now and then which lets me know people are finding my posts useful, but there’s often good discussion between people there.

                                                                            I don’t get a lot of bad comments, so the only reason to remove them would be to get rid of the scripts but I think (hope) Disqus cares enough about its reputation that they’ll fix this and be more careful in future.

                                                                            1. 3

                                                                              The discussion between commenters on one thread can lead to discovery of new ideas for those people or the blog author. That’s essentially what happens here, on HN, etc. It doesn’t happen with email, since the readers don’t know of each other's presence, much less each other's interesting comments.

                                                                          2. 3

                                                                            Can vouch for isso for static sites; I have been using it myself for years on my blog - but alas I don’t get the traffic to generate any comments anyway. The only JavaScript on there is isso and Google Analytics.

                                                                            1. 2

                                                                              Oh, based on the above I figured this was self-install and wouldn’t work for static sites. If it can be used directly from their site though, there’s nothing to stop them making the same mistake in the future? =D

                                                                            2. 1

                                                                              I’m doing the same thing. I have a static Jekyll blog, although now on Netlify rather than GitHub Pages, because then I can use HTTPS with my custom domain.

                                                                              I built and hosted my own IndieWeb Disqus alternative though. And it’s open for others to use: https://webmention.herokuapp.com/

                                                                              It uses WebMention (which, btw, is now a W3C Proposed Recommendation), which removes the need for embedding any authentication mechanisms like Facebook. Instead, everyone writes their comments on their own blogs and pings my service, which then retrieves the comment. I then use a JavaScript snippet that looks for links to comment pages within my blog and embeds any comments (and any new comments, in realtime) inline through basic progressive enhancement (and thus it’s all easily curlable despite the javascriptiness).

                                                                              And there are other similar services that one can easily self-host. There are even people who automatically commit any received comments, both from WebMention and from a comment form, to their static site. I’ve been thinking of eventually experimenting with that as well and making my WebMention endpoint talk to my Micropub endpoint (another standard that’s now going through the W3C) to submit any received mentions: https://github.com/voxpelli/webpage-micropub-to-github Some are already doing that with their respective endpoints.
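                                                                              The sending side of that flow really is just two small steps: discover the target’s WebMention endpoint, then POST a form-encoded `source`/`target` pair to it. A rough sketch (hypothetical helper names; the discovery here is deliberately simplified to a `<link>` tag, whereas the spec also covers the HTTP `Link` header and relative URL resolution):

```python
import re
from urllib.parse import urlencode

def discover_webmention_endpoint(html):
    """Find the receiver's advertised WebMention endpoint in its HTML.

    Simplified sketch: the real spec also checks the HTTP `Link` header,
    allows other attribute orders, and resolves relative URLs.
    """
    m = re.search(r'<link[^>]*rel="webmention"[^>]*href="([^"]*)"', html)
    return m.group(1) if m else None

def webmention_ping(source, target):
    """Build the form-encoded body a sender POSTs to the endpoint.

    `source` is the commenter's own page containing the reply; `target`
    is the post being commented on. The receiver fetches `source` itself
    to verify the link and extract the comment.
    """
    return urlencode({"source": source, "target": target})

# Example with made-up URLs:
page = '<html><head><link rel="webmention" href="https://webmention.example/api"></head></html>'
endpoint = discover_webmention_endpoint(page)
body = webmention_ping("https://commenter.example/reply", "https://blog.example/post")
# POST `body` to `endpoint` with Content-Type application/x-www-form-urlencoded
```

                                                                              That’s the whole protocol from the sender’s side, which is why even a static site can participate: the interactive part lives entirely in the receiving endpoint.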

                                                                              1. 2

                                                                                I have a static Jekyll blog, although now on Netlify rather than GitHub Pages, because then I can use HTTPS with my custom domain.

                                                                                My blog (hosting this article) is actually custom domain over SSL on GitHub pages (using CloudFlare to add the SSL). It’s not ideal, but was easy to add to the existing GitHub Pages site rather than migrating!

                                                                                It uses WebMention (which, btw, is now a W3C Proposed Recommendation), which removes the need for embedding any authentication mechanisms like Facebook. Instead, everyone writes their comments on their own blogs and pings my service, which then retrieves the comment.

                                                                                I’d never heard of this, this sounds really interesting - I shall have to read up! Thanks! :-)

                                                                          1. 1

                                                                            Please Google: do not kill Linux and the other UNIXes by making one more OS that takes all the market and is not UNIX-compatible! (Why would they do that?!)

                                                                            On the other hand, an OS that can compile ffmpeg and Go, and has a built-in terminal, may not be that bad.

                                                                            1. 4

                                                                              Please Google: do not kill Linux and the other UNIXes by making one more OS that takes all the market and is not UNIX-compatible! (Why would they do that?!)

                                                                              To actually innovate? (I guess systems software research might not be so irrelevant after all.)

                                                                              1. 4

                                                                                Does a new OS like this really have to be unix/linux-incompatible? Wouldn’t it make the most sense to try and stay as compatible as possible while innovating in areas that don’t impact compatibility? What would one gain from not staying unix/linux-compatible?

                                                                                1. 2

                                                                                  It would be nice to see something that has learned from the past 40 years of computing, for instance.

                                                                                  1. 2

                                                                                    Got an example of what the benefits would be? I’m honestly curious.

                                                                                    1. 3

                                                                                      Why are we still pretending that our phones (our phones!) have multiple independent interactive users? Why does the concept of a user need to be conflated with the capabilities a process needs to run? Why is there a root user at all? Why are we still pretending that we need teletype compatibility to enable ad-hoc RPC? Why are we still dealing with the ring 0/1 divide?

                                                                                2. 2

                                                                                  kill linux and other UNIXes

                                                                                  Without a comparable number of drivers? Don’t hold your breath.

                                                                                  1. 2

                                                                                    Drivers matter less nowadays. They’re more complex, but there’s a lot fewer of them you need to support. And if your target is things like phones, then you only need to provide for what’s on the device.

                                                                                    1. 6

                                                                                      Drivers matter less nowadays. They’re more complex, but there’s a lot fewer of them you need to support.

                                                                                      Compared to when? I don’t think this is right in any circumstance, and certainly not on a phone.

                                                                                      On a PC in the 90’s you’d have to support an Ethernet card, a sound card, a video card, and IDE, floppy disk and CD-RW controllers.

                                                                                      Now on a PC you have to support all of that (because even if it’s on-board, you still have to support the controllers for it) plus wireless and the proper way to handle an SSD, and of course video card support has gotten orders of magnitude more complex.

                                                                                      When you go to a phone you have to add in supporting the cellular radio, GPS, accelerometers, cameras (not just retrieving storage for them but actually controlling them), fingerprint readers and a lot more.

                                                                                      I’d argue that today’s phones are the most complicated target an OS has ever had. In the past the OS would never have been expected to support all that itself. It would have only provided the basics and vendors would have provided drivers for anything. But if you’re writing a mobile OS you have to support all that functionality right out of the box, and do it really well with optimized power usage for longer battery life.