-Werror is fine; don’t enable it in “production” (e.g. when building release tags).
More importantly, explicitly add it in CI jobs; don’t bake it into your build system. Users grabbing the code shouldn’t see build failures if they use an untested compiler version, but no one should be able to add code that introduces warnings with tested compilers.
Don’t enable it in your build scripts / tools, or at least not without specifically enumerating every check (and even that is risky).
Why would someone choose to use a non-standard libc?
Was RMS right to insist on using the term GNU/Linux to describe the platform? I don’t want people to think that the software I write only relies on a Linux kernel to work correctly.
They cover all of the reasons here: https://musl.libc.org/about.html
Because Linux isn’t a monoculture, and musl is (largely) standards-compliant, so it’s not really non-standard.
The Linux kernel and glibc have been developed and released in tandem for as long as I can remember. I don’t see how swapping out a key component could be expected to work well, even if it works a bit, sometimes.
I’ve worked on a musl port for another kernel, so I’m glad it exists.
It does work well. The DNS issue (which is now resolved) was basically the only glaring issue with musl outside of proprietary software (which often depends on glibc, IMO they should statically link their libc but glibc doesn’t really support that)
Do NUCs still have an internal header for the power switch, like in a standard PC? Maybe the Pi could be resurrected to control that as a GPIO, for remote access :)
WoL isn’t that hard to set up though, why wouldn’t you just use that, like the author ultimately did?
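For what it’s worth, the WoL route needs no extra hardware at all. A minimal sketch of the magic packet (the MAC address below is a placeholder; swap in the NUC’s real one):

```python
# Wake-on-LAN "magic packet": 6 bytes of 0xFF followed by the target MAC
# repeated 16 times, broadcast to UDP port 9. The MAC here is a placeholder.
import socket

def wake(mac: str = "aa:bb:cc:dd:ee:ff") -> None:
    payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", 9))

wake()
```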
Is this an argument? Mobile editing is dog shit. It’s just awful, top to bottom. I can’t believe we’re 15 years into iOS and they still don’t have frigging arrow keys, let alone actually usable text editing. Almost daily, I try to edit a URL in mobile Safari and I mutter that every UX engineer at Apple should be fired.
You know the UX engineers on the Safari team would just love to not have to expose the URL at all…
I don’t really know why you’re singling out Safari, when Google/Chrome have a long history of actually trying to get rid of displaying URLs. And it’s been driven not by “UX engineers”, but primarily by their security team. For example: https://www.wired.com/story/google-chrome-kill-url-first-steps/
(and to be perfectly honest, they’re right that URLs are an awful and confusing abstraction which cause tons of issues, including security problems, and that it would be nice to replace them… the problem is that none of the potential replacements are good enough to fill in)
Both Apple and Google suck. What’s your point?
My point is that I’m not aware of Apple, or “UX engineers on the Safari team”, being the driving force behind trying to eliminate URLs, and that we should strive for accuracy when making claims about such things. Do you disagree?
No one claimed that Safari is the driving force for anything. A commenter just brought it up as a source of personal annoyance for them.
Shrug! Android Play Store, the app, does this. Terrifying! It breaks the chain of trust: reputable app makers link to a URL (thankfully, it’s still a website), but you have to use the app anyway to install anything, and it has nowhere to paste the URL, let alone see it, so you can’t tell whether you’re installing the legit thing or not. Other than trusting their search ranking, the best you can do is compare the content by eye with the website (which doesn’t actually look the same).
I’m reluctant to install third-party apps in general, but, when I do, preserving a chain of trust seems possible for me: if I click a link to, say, https://play.google.com/store/apps/details?id=com.urbandroid.sleep on Android, it opens in the Play Store app; and, if I open such a URL in a Web browser (and I’m signed in to Google), there’s a button to have my Android device install the app. Does either of those work for you?
Wow! That did not work in Firefox just one month ago (when I had to install Ruter on my new phone). Now it does. I tried Vivaldi too, and it doesn’t even ask whether I want to open it in Google Play.
Browser devs to the rescue, I guess, but as long as the app isn’t doing its part – linking to the website – the trust only goes one way.
The upside: it reduces the amount of time you want to use your phone, which, for most people, is a good thing.
Does it though? I mean, you’ll spend much longer fiddling to get the text right!
If you think “oh this’ll just be a quick reply” and then end up actually typing more than you thought you would, it makes sense to finish the job you started on mobile, which then actually takes more time. Especially when you’re on the go and you have no laptop with you.
It really just means I use the phone for composing conceptually light things because I don’t want to mess with it any more than necessary. (This is likely an adaptation to the current state versus a defense of how it is.)
I don’t miss arrow keys with iOS Trackpad Mode[1]. The regular text selection method is crap, but it works well enough doing it via Trackpad Mode.
I think part of the problem with the iOS Safari URL bar is that Apple tries to be “smart” and modifies the autocorrect behavior while editing the URL, which, in my case, ends up backfiring a whole lot. There’s no option to shut it off, though.
Wow, I had no idea this existed! Apple’s iOS discoverability is atrocious.
Agreed. Just the other day I found the on screen keyboard on my iPad was floating and I couldn’t figure out how to make it full size again without closing the app. A few days later I had the thought to try to “zoom” out on the keyboard with two fingers and it snapped back into place!
As someone more comfortable with a keyboard and mouse, I often look for a button or menu. When I step back and think about how something might be designed touch first, the iOS UX often makes sense. I just wish I had fewer “how did I not know that before!” moments.
I mean, what meaningful way is there to make it discoverable? You can’t really make a button for everything on a phone.
One other commonly unknown “trick” on iOS is that tapping the top bar often works like the HOME key on desktops, but again, I fail to see an easy way to “market” it, besides Clippy or some other annoying tutorial.
Actually, the ‘Tips’ app could have these listed instead of the regular useless content. But I do think that we should make a distinction between expert usage and novices, and both should be able to use the phone.
I really don’t have an answer to that. I’ve never looked through the Tips app, nor have I been very active in reading iOS-related news[1]. Usually I just go along until I find a pain point that’s too much, and then I try to search for a solution or, more often, suffer through it.
[1] I do enjoy the ATP podcast, but the episodes around major Apple events are insufferable, as each host casually drops $2,000 or more on brand-new hardware, kind of belying their everyman image.
The other problem I encounter near daily is not being able to edit the title of a Lobsters post on the phone. It really sucks.
The far more frustrating thing on lobste.rs is that the Apple on-screen keyboard has no back-tick button. On a ‘pro’ device (iPad Pro), they have an emoji button but not the thing I need for editing Markdown. I end up having to copy and paste it from the ‘Markdown formatting available’ link. I wish lobste.rs would detect iOS clients and add a button to insert a backtick into the comment field next to the {post,preview,cancel} set.
Long-press on the single-quote key and you should get a popup with grave, acute etc accents. I use the grave accent (the one on the far left) for the backtick character.
Thank you! As someone else pointed out in this thread, iOS is not great for discovery. I tried searching the web for this and all of the advice I found involved copying and pasting. Edit: testing if `this actually works`. It does!
This is a general mechanism used to (among other things) input non-English letters: https://support.apple.com/guide/ipad/enter-characters-with-diacritical-marks-ipadb05adc28/ipados
Oddly enough, I knew about it for entering non-English letters and have used it to enter accents. It never occurred to me that backtick would be hidden under single quote.
You can make a backtick by holding down on single quote until backtick pops up, but it’s pretty slow going.
This seems super useful, but I’ve spent the last ten minutes trying to:
- Enter selection mode using 3D Touch
- Get the trackpad to not start jittering upwards or downwards
It seems either that my phone’s touchscreen is old and inaccurate or I am just really dang bad at using these “newfangled” features.
I agree with your other reply - discoverability is atrocious. I learned that you can double/triple tap the back of your phone to engage an option which blew my mind. I wonder what I’m missing out on by not ever using 3D touch…
Samesies. The funniest bit, at least for me, is that I’m usually just trying to remove levels of the path, or just get back to the raw domain (usually because autocomplete is bizarre sometimes). This would be SUCH an easy affordance to provide since URLs already have structure built-in!
You may already know about this, but if you put the cursor in a text field, and then hold down on the space bar, after a second or two you enter a mode that lets you move the cursor around pretty quickly and accurately.
edit: I guess this is the “trackpad mode” mentioned below by /u/codejake
I find the trick of pressing down on spacebar to move the cursor works pretty well.
It’s okay, but it’s still not as good as digital input for precision.
The problem is that Apple phones don’t have buttons.
No phones do anymore, it seems…
Arthur C Clarke predicted this in The City And The Stars. In its insanely-far-future society there is a dictum that “no machine shall have any moving parts.”
I wish people would be a little pickier about which predictions they implement and maybe skip the ones made in stories with a dystopian setting. Couldn’t we have stuck to nice predictions, like geostationary satellites?
It’s hidden, but… tap the URL bar, then hold down space and move the cursor to where you want to edit. Now normal actions work (e.g. double tap to select a word).
That said, I agree with your second sentence.
The trackpad mode works very poorly on the iPhone SE because you can’t move down since there’s no buffer under the space key, unlike the newer phone types. It doesn’t work well for URLs because the text goes off screen to the right, and it moves very slowly. Ironically I’m on an iPad and I just tried to insert “well” into the last sentence and the trackpad mode put the cursor into the wrong place just as I released my tap. It just sucks. This is not a viable text editing method.
I wish the Matrix team would have more focus. They’re working on all this new experimental stuff - a new client, a new server, etc. - all the while the existing stuff is severely broken in many ways.
I think you’ve entirely missed the point: we’ve focused specifically on fixing the existing severely broken stuff by writing a client to replace the old broken client. We haven’t written a new server; we added an API to the existing server via a shim, so we could focus and implement faster. There are no new features in Matrix 2.0 (other than native group VoIP) - everything else is either removing stuff (the broken old authentication code in favour of Native OIDC), or fixing stuff (the horrific performance problems, by introducing Sliding Sync and Faster Joins).
With the new server, I was thinking of Dendrite. It’s good you’re fixing Element with Element X, but it feels like it’s been in beta forever, while people keep running into problems with the old Element.
Synapse (the 1st gen server) has simply had the most focus, by far - Dendrite has ended up being a test bed for experimentation. Synapse has improved unrecognisably over the years and now is basically boring stable tech.
What about issues related to e2ee and verification? Some of these have been open for a very long time, and I’ve personally experienced many of these, for years. It definitely gives the impression that the Matrix team has lost focus when problems like these exist in a core feature for Matrix (e2ee).
https://github.com/vector-im/element-android/issues/5305
https://github.com/vector-im/element-android/issues/2889
https://github.com/vector-im/element-android/issues/1721
There are tons more of these issues in your bug tracker, some even reported against the new rust crypto thing, these are just some of the ones I am subscribed to. Is functional E2EE a priority for Matrix?
I can’t remember when I last had this issue because of Matrix doing something wrong. So maybe it’s just not happening that often. I personally wouldn’t say it even exists, from my experiments.
We rewrote e2ee on a single audit-ready Rust codebase rather than chasing the combinatoric explosion of bugs across the various separate Web, iOS & Android implementations, which took ages, but has finally landed with the exception of Web, which should merge next week: https://github.com/vector-im/element-web/issues/21972#issuecomment-1705224936 is the thing to track. Agreed that this strategy left a lot of people in a bad place while the rewrite happened, but hopefully it will transpire to be the right solution in the end.
What if Signal got a post-phonenumber makeover. ;]
I’d very much like that. Apparently it’s already in the codebase; they just need to turn it on. Sick of waiting for this, to be honest.
I don’t think that the code in the codebase is actually doing the right thing. Signal has inherited a design flaw from the phone network and email: they conflate an identity with a capability. Knowing my phone number should not automatically give you the right to call me.
The thing I want from Signal is the ability to create single-use capabilities that authorise another party to perform key exchange. That lets me generate tokens that I can hand to people (ideally by showing them a QR code from the UI) that let them call me from their Signal account but don’t let them (or whatever social networking malware they’ve granted access to their address book) pass on the ability to call me. Similarly, I want to be able to give a company a token that lets them call me but doesn’t let them share that ability with third parties.
This would also significantly reduce spam. If I have someone’s phone number in my address book and they have mine in theirs, access can be granted automatically, but anyone else needs to be authorised to send me messages. Spam fighting is the main reason they give for keeping the server code secret, and it’s only necessary because of a fundamental design flaw in the protocol.
Unfortunately, Signal wants to add new kinds of identifiers but keep conflating them with capabilities, rather than fixing the problem.
Adding new identifiers will be useful in group chats (currently, I can’t join a group chat without sharing my phone number with everyone there), letting me have a per-group identifier, but that doesn’t help much if one malicious person in the group can leak that identifier and then any spammer can use it to contact me. If they built a capability mechanism then I could authorise members of the group to send me group messages but not authorise anyone else to contact that identity and, if I wanted to have a private chat with a group member, explicitly authorise that one person to send me private messages.
Most of the infrastructure for doing this was already added for sealed senders, but I haven’t seen any sign that anyone is working on it.
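To make the capability idea concrete, here’s a toy sketch (my own illustration, not Signal’s protocol or anything in their codebase) of a single-use “may contact me” token: the recipient’s client mints it, anyone presenting it may initiate key exchange exactly once, and a forwarded or leaked copy is worthless after redemption:

```python
# Toy single-use contact capability (illustrative only, not Signal's design).
# The recipient's client holds SECRET; anyone showing a valid, unspent token
# may initiate key exchange exactly once.
import hashlib
import hmac
import os

SECRET = os.urandom(32)      # held only by the recipient's client
spent: set[bytes] = set()    # nonces that have already been redeemed

def issue_token() -> bytes:
    """Mint a token to hand out, e.g. rendered as a QR code."""
    nonce = os.urandom(16)
    tag = hmac.new(SECRET, nonce, hashlib.sha256).digest()
    return nonce + tag

def redeem(token: bytes) -> bool:
    """Accept a contact request iff the token is genuine and unused."""
    nonce, tag = token[:16], token[16:]
    expected = hmac.new(SECRET, nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected) or nonce in spent:
        return False
    spent.add(nonce)         # single-use: a spent token can't be passed on
    return True

t = issue_token()
assert redeem(t) is True     # first use: key exchange may proceed
assert redeem(t) is False    # a replayed or forwarded copy is rejected
```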
There are legitimate usability and UX problems with federated and/or decentralised chat platforms. As well as more technical cryptographic hurdles compared to a centralised solution.
However I agree wholeheartedly with your point - there are just problems that need to be solved before any decentralised messaging system is accessible and seamless enough for “normal” users.
Also, if memory serves, I don’t believe Moxie works with Signal any longer. I think he’s left.
Yes, Moxie wrote at length about the challenges of federation. The main one being the difficulty of coordinating changes and improvements.
In addition to UX, if Signal were widely federated, it might be 100x harder to add PQC like they just did, if it involved convincing every Signal server admin to upgrade.
Rightly or wrongly, federated systems are more ossified, and in the case of something like Signal, that presents future security risks.
In addition to UX, if Signal were widely federated, it might be 100x harder to add PQC like they just did
The change primarily (or even only) affects end-to-end components, meaning the server infrastructure is minimally (or not at all) affected. 100x harder it definitely is not.
federated systems are more ossified
But that is for ideological reasons, not technological ones. Federated systems often emphasise compatibility - that isn’t a technical requirement though. If you are in control of the primary server as well as the main client, you can force changes anyway. It raises the bar for deployments in that federation but that’s a good thing.
I dunno, email is the ultimate federated communication platform, and we still don’t have widespread encrypted email (without relying on a central provider). So maybe it’s not harder because of the server software, but it sure seems a lot harder to me.
I get what you’re saying, but federated systems have much larger consequences than just the server infrastructure. Perhaps I should have said “centralized” instead, since the relevant issue is that Signal is solely responsible for all server and client code. They don’t need to do the slow business of coordination; we’ve seen from older systems like email/IRC/Jabber that federated systems tend to take a long time to get upgraded to the point that improvements can be relied upon.
In another part of lobste.rs right now, Mastodon is being scorched for not acceding to each and every demand put to it by other members of the fediverse. If Mastodon was dominant enough to unilaterally enforce, say, E2EE on ActivityPub, is that decentralized? Would that be a popular move?
My understanding was that he was stepping down as CEO but still very involved with the project. I may be totally wrong on this.
The centralization of Signal and the refusal to let any alternative client connect to the central Signal server is a strong decision by Moxie, for a lot of technical reasons I think I understand (I simply disagree that those technical considerations should take precedence over their moral consequences). But, at least, Moxie has a real, strong and opinionated ethic.
I hope that whoever comes next will keep it that way. It is so easy to be lured by the blinking lights when you start to have millions of users. That’s why we should always consider Signal a temporary solution. It is doomed from the start, by design. In one year or ten, it will be morally bankrupt.
The opposite could be said of the Fediverse. While the official Mastodon project has already shown signs of being “bought”, Mastodon is not needed to keep the Fediverse going. It could be (and already is) forked. Or completely different implementations can be used (Pleroma is one of the most popular).
I really want to use Signal, and recommend it to my friends and families, but I’m also sick of waiting for them to offer end-to-end encrypted backups on iPhone (it’s apparently possible on Android).
Not going to happen without in-browser code verification, which needs quite a lot of coordination between standardisation bodies and browser vendors. WhatsApp’s approach is not enough.
Any program that has network access can listen on ports, so if any malicious code grabs localhost:1234 before Signal does, it gets all the cookies, even if it can’t access your files.
The only thing they’d need is to add a “secure mode” to Service Workers, which would prevent all bypasses. The difficulty is of course preventing the abuse of it for persistent client-side takeovers on compromised websites; I don’t know if a permission dialog would be good enough since people don’t actually read what they say.
I mean, it seems like a pretty vital part of how they have chosen to configure their network. Cards on the table, this is pretty similar to my setup (and I don’t work there), but I feel like it’s pretty easy to replace that one step with e.g. Nebula and end up in ideologically the same place
Xe is a prolific blogger whose content gets linked to all the time and is always tinkering with Tailscale in crazy setups. They are so transparent that on the bottom of the linked article is a link to their salary history! This isn’t secretly promotion for Tailscale.
I don’t think this is fair. That information should be at the beginning of the article, or right before the first mention of Tailscale. I doubt that most readers who got the link to that article from a link aggregator will know where Xe works, nor will they end up reading other pages on their site.
I dunno - I really enjoy Xe’s blog and have learned a whole lot from it. I don’t mind if they happen to make some pennies sharing their thoughts with us.
From a packaging perspective, Void Linux’s policy is to only get specific versions, ideally from a tarball download (for caching and checksumming purposes). Is there a backlight-auto-0.0.1.tar.gz somewhere I can download?
For build infrastructure, stuff works nicest when we can download an artifact (of source code) and go “This is $X package at $Y version.” This is of course source code so we can build from source, but it’s way nicer and lower maintenance (on the packaging side) than trying to wrangle a git repo, installing git on the builders, etc.
A low-effort way to achieve this on the software maintainer’s side is a GitHub or GitLab repo where you tag a version like v0.0.1. The forge then generates a download link like https://github.com/keybase/client/archive/v${version}.tar.gz, providing a cacheable artifact without doing any repository management. [Downside: GitHub sometimes changes their automatic tarball generation technique, invalidating existing checksums for no apparent reason]
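On the packaging side, the whole dance then boils down to something like this sketch (repo and version are just the example from above; the printed hash is what you’d pin in the package template):

```python
# Fetch a forge-generated tag tarball and compute the checksum to pin.
import hashlib
import urllib.request

version = "0.0.1"
url = f"https://github.com/keybase/client/archive/v{version}.tar.gz"

with urllib.request.urlopen(url) as resp:
    data = resp.read()

# If the forge ever regenerates the tarball differently, this value changes
# and the build rightly fails.
print(hashlib.sha256(data).hexdigest())
```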
Yep! That’s probably even better ;) but does require a commitment to keeping that tarball around. Or, if your git tool can generate tarballs for tags, and does so repeatably (so the checksums are the same every fetch), this can ease server maintenance hassles.
In the future this’ll change, but I just don’t have the time right now. :) In any case I’ll try to remember to ping you or you’ll see the failure on your end and you can ping me
I just checked chrome://settings/adPrivacy and it appears that everything was already off. Maybe I disabled this some time ago? Anyone else able to check their settings?
It’s also sad how most (all?) of the technology still requires proprietary junk…
Not all - the ReMarkable 1 can run on 100% libre software, with Parabola-RM. Obviously, since Parabola is FSF-approved, you can’t get wifi unless you recompile the kernel, and of course the RM1 is a tablet and thus doesn’t have a keyboard, but that’s a surmountable problem.
Great overview of what’s going on. I am 100% behind the OpenTF response to this.
If you care about your tools being free/open, it’s best to avoid anything covered by a CLA where you can. The one and only purpose for open code to have a CLA is enclosure.
It’s also worth remembering that when a company has all the power to redefine the licensing terms, anything they promise can change at any time.
If you want to relicense something as a new version of the GPL, you would need a CLA, right? It’s one of the stated things preventing Linux from relicensing, even if Linus agreed to do it.
That was also a big difference between emacs and xemacs, right?
After the pain of getting from GPLv1 to GPLv2, the FSF did two things: require copyright assignment to the Foundation for a bunch of core GNU projects, and include a clause in the GPLv2 allowing derived works to be released under the GPLv2 or any later version. Unfortunately, Linus dropped that clause from the version used by Linux, so Linux is stuck on v2.
Thank you, I didn’t know that reciprocity was the most important aspect for Linus.
I personally think the EUPL should become the default license for open-source projects in the business context, to promote reciprocity. It’s a weak copyleft license (OSI and FSF approved), so it does not strike mortal fear of “viral” license spread into business partners. Similar to the EPL, MPL, and LGPL, it creates an obligation to share back changes made to the weak-copyleft-licensed portion of the code. Finally, the EUPL closes the SaaS loophole, making it similar to a (non-existent) Affero LGPL license.
Like every software license, until and unless the EUPL is tested in court, it’s an unknown quantity, and will be rejected outright by any legal team worth their salt.
(Every GPL license, including but not limited to GPLv2, GPLv3, and the AGPL, is also generally verboten.)
If you want your software to be usable in a business context, you have precisely two options: Apache or MIT.
In which country/countries were those large orgs established? And how was your project used? e.g. a component of a SaaS offering, a dependency in a deployed application, etc.?
European orgs, used as a dependency in deployed apps shared with customers as well as internal SaaS. Note that in my last comment, I said EPL (Eclipse Public License), not EUPL (European Union Public License). EPL is a well-known per-file (weak) copyleft license, its scope is well understood. If your org forbids EPL, go tell lawyers that Clojure, JUnit, and Eclipse IDE code is licensed under EPL if you want to have some fun. So, the bit about “precisely two options” is not quite true.
I guess that means the EPL has been battle-tested in European courts – cool! News to me. Add that one to the list, I suppose, for European organizations.
Sorry for being insufficiently precise. When I said “business context” I meant to describe proprietary and closed-source software produced by a company, which is shipped to or used by customers, and which drives revenue.
I meant to describe proprietary and closed-source software produced by a company, which is shipped to or used by customers, and which drives revenue.
Then your statement of “precisely two options: Apache or MIT” is still incorrect. BSD at the very least has to be included, since plenty of proprietary software has been built using BSD-licensed code.
And then there’s stuff like the Qt dual-license situation where you can accept it under the GPL or pay them for a license that lets you do proprietary stuff. MySQL does that too.
OK, my original statement was incorrect, and should have included the BSD license. Thank you for pointing out my error. I’m not sure if this mistake is relevant to my underlying point(s).
I wonder how this works. The EUPL says you must publish source code while you are “providing access to its essential functionalities”, but it also says you can combine the software with code distributed under a compatible licence, and communicate the derived work under the terms of the compatible licence. That seems like a straightforward way to re-open the SaaS loophole.
Well, you first need to create an OSS project that needs to include the EUPL project code (not a near-identical fork but a sufficiently expressive work, see §1 for the definition of “Derivative Works”). I also imagine the judge in a potential lawsuit to be especially sceptical if your company is both the author of such a slim fork and is the one accused of violating EUPL. I’ve never heard a discussion in a European company on how to use OSS code without fully complying with its license (it’s usually a discussion around “Is using this OSS project legally risky? Should we avoid using it?”). Anyone doing so is clearly risking a lawsuit and I have not seen lawyers in big companies open their companies up to risk.
Note that the venue for legal disputes is the EU country where the authors reside/legally registered, and Belgian law if authors reside outside EU. I expect this to have a requisite effect on non-EU companies thinking of doing something funny with forking/relicensing without doing significant dev work.
In general, I am considering EUPL mostly for libraries and components where the viral aspect of a license would have a chilling effect on downstream users. If you are just pulling an EUPL-licensed library via Maven or pip, you don’t have to do anything. If you made a small fix in a fork, it most likely will not rise to a level of a Derivative Work. If you indeed forked a project and created a lot of your own code around it, then yes, this license indeed allows you to license your major fork under LGPL or what have you. This naturally means that the entity doing so would need to license a significant amount of their work under LGPL.
If you have a deployable project like Mastodon or MongoDB, it makes more sense to apply AGPL to it.
GPLv2 doesn’t contain a notice that the software may be distributed under the terms of any later version. It is just common practice to put this in the license notes. In other words, the source files themselves (e.g. foo.c) contain the “or later version” clause in their top comment, but the license itself (i.e. the LICENSE file) does not.
I guess it’s theoretically possible, but I’ve never seen or heard of a case where the CLA made something more open.
There have been cases (Netscape, Sun) of companies relicensing as OSI but in those cases they already held the copyright and there were no CLAs. In practice I think if the project started out free/open and there wasn’t a single Org in charge of it, getting the requisite signatures is probably an impossible task for a non-trivial case.
That depends on the licence itself. Many licences include a section about releasing under the next version. For example, the GPLv3 has this:
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
Who decides what counts as a later version of the same license? Could I publish something and call it a successor to the GPL and so therefore a “later version” of GPLv3? I assume not… but how is this actually legally regulated?
The GPL is a document that is owned by the FSF. Unless they’ve changed it, it includes text that specifically prohibits you from creating modified derived works, which always struck me as slightly amusing. Even without that, the ‘as published by the Free Software Foundation’ bit would probably be interpreted as meaning that newer versions must also be published by the FSF.
One of the concerns with this clause was that FSF leadership might decide that the MIT license is GPLv4. If you have this clause in your license, you have no way of preventing MIT-licensed derived works.
The license typically says who; the GPL says versions published by the Free Software Foundation.
As another example, the CDDL lists Sun Microsystems as the license steward, stating that they may publish a new version, and spells out the effect of a new version.
I use NewPipe every day, have an adblocker on my computer and phone and am absolutely shocked when I watch YouTube at other places and see how many ads they are pushing. Even more shocking is how people have gotten used to it.
Of course Alphabet/YouTube has to finance itself somehow, but 11,99€/month for YouTube Premium is definitely overpriced. If you consider that YouTube only makes a fraction of a cent per ad view per person, they could probably be profitable at 0,99€/month, and I would be willing to pay that.
This would not resolve one other major issue I have with YouTube: The website is bloated as hell. NewPipe is smooth and native, runs on F-Droid and allows easy downloading (audio or video).
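Back-of-envelope for that pricing claim; every input here is an outright guess, and the point is only the order of magnitude:

```python
# All inputs are guesses; the point is only the order of magnitude.
ads_per_hour = 10            # rough ad load a heavy viewer might see
revenue_per_ad = 0.003       # in €, "a fraction of a cent" per ad view
hours_per_month = 30         # roughly an hour of viewing a day

print(f"{ads_per_hour * revenue_per_ad * hours_per_month:.2f} €/month")  # 0.90 €/month
```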
absolutely shocked when I watch YouTube at other places and see how many ads they are pushing
…
11,99€/month for YouTube Premium is definitely overpriced.
Are you sure? ;-P
So, I pay for a family premium account, primarily to disable ads on stuff my kids watch. I have several motivations:
I want them to grow up assuming that it’s reasonable to pay for content (albeit indirectly in this case).
Ad content is so often mental poison.
I want them to grow up unused to omnipresent advertising. It should stand out, and feel intrusive and horrible.
Now that they’re starting to find content they really like (Explosions and Fire is great for kids, if a bit sweary) I’m going to fund them to choose their favourite creator each, via Patreon or similar.
Edited to add: I also run ad-blocking, at a network level. I have no qualms at all about ad-blocking in general, and think that ad-tech is a blight on the world. Please don’t mistake my willingness to pay for YouTube Red (or whatever it is called) as a criticism of adblocking.
Intentionally systematic do you think, or an unintended consequence of the technology?
I’ve not had a lot to do with video creators professionally. But when we did engage one to create some YouTube videos capturing our company and what it was like to work there, I was super impressed. Worth every cent and so much better than anything most amateurs could produce.
I don’t know that I’m interested in trying to divine if people intend to exploit or just accidentally help build systems which do. The purpose of a thing is what it does.
So when the thing does something you consider undesirable, how then do you determine whether to incrementally improve the thing, reform the thing, or burn the thing to the ground?
So you don’t consider what the intended purpose of a thing was, in that process? Even if only for the purpose (heh) of contemplating what to replace it with?
That would be a useful principle for me if you replaced the need to understand the intentionality of systems with understanding the material circumstances and effects of them. Spinoza, Marx, and the second-order cyberneticists had the most useful views on this in my opinion.
Ah, yeah - what I meant was, if you want YouTube content and loathe the ads, paying to remove them probably isn’t a rip-off.
At least, in the proximate transaction. I do wonder how much of that $11 or whatever goes to the content creators. I know it’s zero for some of the content my kids watch, because it turns out swearing Australian chemistry post-docs blowing things up isn’t “monetizable” :)
Hence paying the creators through a platform like Patreon. Although I’d rather not Patreon specifically.
(Edited to add: I remember how much I treasured my few Transformers toys as a child. Hours of joyous, contented, play alone and with others. Sure they were expensive for what they were, physically, and cartoon TV advertising was a part of what enabled that. It’s a similar deal with my kids and Pokemon and MTG cards … total rip-off from one angle (“it’s just a cardboard square”) but then you see them playing happily for hours, inventing entire narratives around the toys. Surely that’s no rip-off?)
This convinced me to give Newpipe a try and omg it is so much better than using Firefox even with uBlock Origin, let alone the Android Youtube app. Thank you so much for the recommendation!
Remember that YouTube Premium also comes with other things, like a music streaming service. Apparently YouTube tentatively thinks 6,99€ is around what the ads are worth.
Much like any creative endeavor, the platform/middleman almost certainly takes a big cut, and then what’s left is probably a Pareto-type distribution where a small number of top-viewed channels get most of the payout.
On the other hand, if you don’t pay for it and don’t watch any ads, then it’s likely that the people creating the videos get nothing as a result.
With tens to hundreds of 2FA accounts… the only sane way to utilize 2FA is just putting them in my password-manager, so they can be quickly found/selected/auto-filled.
4 2FA codes is nice… but then they would only be for the login on my PC itself, and the password-manager unlock.
2FA is for people who don’t use secure passwords. When the password is 20 random characters, used for only one purpose, and locked behind a suitable pass phrase, using a good (slow) password key derivation function, its security is orders of magnitude higher than the typical, possibly reused, low-entropy memorable password. Unless the security of the service is the kind of joke that stores passwords in plaintext, 2FA is hardly needed.
Now there’s always the phishing attack, but the only reliable way out of that one is a hardware token, which can authenticate the service you’re logging in to.
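The entropy claim checks out on a napkin:

```python
# 20 random printable-ASCII characters vs. one memorable dictionary word.
import math

print(20 * math.log2(94))   # ~131 bits for the random password
print(math.log2(200_000))   # ~17.6 bits for a word from a 200k-word list
```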
Fact: for people who use one high-entropy password per service, the security benefits of 2FA are marginal.
Fact: for people who use the same low-entropy password everywhere, the security benefits of 2FA are kind of major.
Conclusion: 2FA is (mostly) for people who don’t use secure passwords. See what I mean?
You need to think about how exactly passwords are compromised:
1. It’s compromised by another website.
2. It’s brute forced.
3. It’s stored in the clear in the website’s database.
4. It’s phished.
My password is immune to (1) because I use it for a single website only. It’s immune to (2) because of its high entropy (even if the website stored an unsalted fast hash of it). Finally, (3) is extremely unlikely for websites that make the effort to add 2FA.
That leaves (4), but TOTP can also be phished. Not the shared secret, but once a session is started, attackers can generally do maximum damage (some operations may be password/2FA protected, but that is so far from enough…).
The only reliable solution that actually increases security compared to a password alone is the possession of a security token: a local procedure that can authenticate the website, as well as being authenticated by it. A software security token could be compromised if my computer is compromised or stolen, so the very best here is a hardware security token. The hardware security token might still be compromised if it is stolen by an adversary that performs power analysis, but stealing its keys won’t help if the real keys are derived from a password I input into it.
Until I have a security token I can use, I’m holding on to my passwords for as long as I can.
That’s what I had in mind. Now think of the power such an attacker would have:
If they can phish me into typing my password, they can phish me into typing my TOTP code. They get in in both cases, and there’s a good chance they can change my credentials right then and there (and if they do it quickly and in an automated way, my TOTP code might still be valid, so they can easily change my credentials and lock me out for good).
There is no way to exploit bugs in my local software if they don’t already have meaningful control over my computer. If they do have that control, they can likely log my keystrokes and copy my database. Or, failing that, intercept the clipboard and get whatever specific password I’m copying. Even if they can’t steal my TOTP recovery codes (which are most likely stored in my password database, but let’s say I’m paranoid enough to put them elsewhere), they can still log my TOTP temporary code when I’m logged in and again lock me out of my online account.
In both cases, TOTP fails to increase my security. My password manager with its local database makes TOTP utterly useless. Hardware tokens, on the other hand, can stop phishing attacks. Knowing that phishing is by far the bigger threat, this makes hardware tokens pretty useful.
Now there’s always the phishing attack, but the only reliable way out of that one is a hardware token, which can authenticate the service you’re logging in to.
TOTP is also authenticated. You don’t just randomly enter any 6-digit code: it’s derived from a symmetric key that both parties have. Its biggest drawback is that the key can be copied, and therefore there’s no guarantee that only one party has it.
A hardware token’s biggest strength is that it can’t easily be copied. Thus, it’s something you, and only you, have.
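For reference, all of TOTP fits in a few lines, which is exactly why a copied secret gives the copier everything. A sketch of RFC 6238 with the usual defaults (the base32 secret below is a common demo value, not anyone’s real key):

```python
# RFC 6238 TOTP with common defaults: HMAC-SHA1, 30 s steps, 6 digits.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same secret on both sides -> same code
```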
Maybe I wasn’t clear. My point here is that TOTP does not help you identify the website you’re logging in to. If you’re logging in to scam.example.com and failed to notice it wasn’t the real deal, checking your phone for the relevant 6 digits won’t help you. And the scam website can then just forward all your credentials (including TOTP) to the real website and steal your identity or whatever.
Hardware tokens are different. Since they’re not passive, they can perform (authenticated) key exchange with the service, and if they use a different key for each service (possibly by deriving their private key from the service’s name, for instance), trying to log into the scam’s website will just cause you to use a different key, and the login will fail (or at least the scammers won’t be able to connect you to the real website, let alone perform a real MitM attack; the best they can do is make you believe you’re logged in and trick you into leaking information, but at least they won’t have your credentials).
I recall a study, by Google I think, about how 2FA affected phishing attacks. All 2FA methods reduced successful phishing attempts, but only hardware tokens completely eliminated them.
That kind of thing, yes. (I guess this fine wristband TOTP generator technically counts as a “hardware token”, but it doesn’t use the protocols that completely stop phishing.)
Note that a properly set up phone could probably serve as a hardware token. Obviously the attack surface is much larger, but as long as the phone is unhacked it should work.
They add a time-based component. Regular passwords are practically static. Say your password gets man-in-the-middled in transit, or you type it into something that looks like your bank but isn’t. With the usual 30s TOTP expiry time, those credentials are only good for a few seconds, which limits their usefulness, as an attacker has to use them right away.
In this scenario we’re not protecting against loss of the password store, admittedly.
You can also take the work from postmarketOS and port it over to mobile-nixos, they work pretty similarly. I’ve now ported mobile-nixos to a few old devices and I love it. I now have an ebook reader, a games console and various phones running NixOS.
The user interfaces are never quite right, but it’s great to have a bunch of powerful ARM-based devices with full access to NixOS.
I’m a bit surprised no one has brought up the pinephone yet (or the pro, since that’s the one that’s got usable hardware specs). It has a keyboard case, is relatively modern ARM, and moderately good firmware/software support (YMMV).
I mean, on the one hand, yes I’m lazy. On the other hand, trying to figure out how to sanely create a .deb package has devoured far more hours of my life than it really deserves, and I like Debian.
If you want your software in Debian (or any other distro), I doubt it’ll happen automatically in most cases unless you do it yourself. It’s not like distro maintainers are constantly looking for software to package and maintain.
This essay is an admirable display of restraint. I would have been far crueler.
In my experience, protocols that claim to be simple(r) as a selling point are either actually really complex and using “simple” as a form of sarcasm (SOAP), or achieve simplicity by ignoring or handwaving away inconvenient details (RSS, and sounds like Gemini too.)
After years of thinking and reading RFCs and various other documents, today I finally understood: “Simple” refers to “Network”, not to “Management Protocol”! So it is a Management Protocol for Simple Networks, not a Simple Protocol for the Management of Networks.
Let’s not forget ASN.1, DCE, and CORBA. Okay, let’s forget those. In comparison SOAP did seem easier because most of the time you could half-ass it by templating a blob of XML body, fire it off, and hopefully get a response.
achieve simplicity by ignoring or handwaving away inconvenient details
Exactly, and the next-order effect is often pushing the complexity (which never went away) towards other parts of the whole-system stack. It’s not “simple”, it’s “the complexity is someone else’s problem”.
Pretty sure that’s because RSS2 is not supposed to contain HTML.
But RSS2 is just really garbage even if people bothered following the spec. Atom should have just called itself RSS3 to keep the brand awareness working.
Well, Winer’s way of arguing was never really via the legal system, it was by being a whiny git in long-winded blog posts. Besides, RSS versions <1.0 were the RDF-flavored ones (hence RSS == RDF Site Summary), and no-one wanted that anymore.
<=1.0, and people kept using 1.0 long after 2.0 existed because some people still wanted that :) Though those people were mostly made happy by Atom, and then 1.0 finally died.
O god, don’t get me started. RSS 2 lacked proper versioning, so Dave Fscking Winer would make edits to the spec and change things and it would still be “2.0”. The spec was handwavey and missing a lot of details, so inconsistencies abounded. Dates were underspecified; to write a real-world-usable RSS parser (circa 2005) you basically had to keep a dozen different date format strings and try them all until one worked. IIRC there was also ambiguity about the content of articles, like whether it was to be interpreted as plain text or escaped HTML or literal XHTML. Let alone what text encoding to use.
I could be misremembering details; it’s been nearly 20 years. Meanwhile all discussions about the format, and the development of the actually-sane replacement Atom, were perpetual mud-splattered cat fights due to Winer being such a colossal asshat and several of his opponents being little better. (I’d had my fill of Winer back in the early 90s so I steered clear.)
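The date-format scramble described above looked something like this in practice (a sketch from memory; real-world format lists were much longer):

```python
# Try pubDate format strings until one sticks.
from datetime import datetime

CANDIDATE_FORMATS = [
    "%a, %d %b %Y %H:%M:%S %z",   # RFC 822 with numeric zone, per the spec
    "%a, %d %b %Y %H:%M:%S %Z",   # named zones like GMT
    "%d %b %Y %H:%M:%S %z",       # weekday omitted
    "%Y-%m-%dT%H:%M:%S%z",        # ISO 8601, seen in the wild regardless
]

def parse_pubdate(raw: str):
    for fmt in CANDIDATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt)
        except ValueError:
            continue
    return None  # real parsers had even uglier fallbacks after this

print(parse_pubdate("Sat, 07 Sep 2002 00:00:01 +0000"))
```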
I’ve been using Migadu for a few years now; they’re great. The best thing about them is that I always get very quick replies from their support teams when needed.
I host the email for a few dozen accounts on their largest account, and it works smoothly. The webmail is okay. No calendar integration or the like, which was a pain point for a few of the users when I migrated from an old GMail service when Google decided to start charging for it.
Best thing to do if you’re considering it is to look at how much mail you’ve sent and received in previous months. I think there’s a half-decent Thunderbird add-on that’ll summarise that information for you if nothing else.
Also, if you’re keen on moving but are pushing the limit on sends then remember that there’s no reason you need to always use their SMTP service! I often use my ISP’s (sadly now undocumented) relay and never had any bother.
Yes, I’m on the Micro plan, and I haven’t come close to the limits. If there were one day when I exceeded the limits I’m sure they wouldn’t mind; if I had higher email flux in general though I’d be happy to pay more.
I just started using them for some things, and the amount of configurability they give you is crazy (in a great way, that is). I’m going to move all of my mail hosting to them some day.
-Werror
is fine, don’t enable it in “production” (e.g. building release tags)More importantly, explicitly add it in CI jobs, don’t bake it into your build system. Users grabbing the code shouldn’t see build failures if they use an untested compiler version but no one should be able to add code that introduces warnings with tested compilers.
Don’t enable it in your build scripts / tools, or at least not without specifically enumerating every check (and even that is risky).
Why would someone choose to use a non-standard libc?
Was RMS right to insist on using the term GNU/Linux to describe the platform? I don’t want people to think that the software I write only relies on a Linux kernel to work correctly.
They cover all of the reasons here https://musl.libc.org/about.html
Because Linux isn’t a monoculture, and musl is (largely) standards-compliant so it’s not really non-standard.
The Linux kernel and glibc have been developed and released in tandem for as long as I can remember. I don’t see how swapping out a key component could be expected to work well, even if it works a bit, sometimes.
I’ve worked on a musl port for another kernel, so I’m glad it exists.
It does work well. The DNS issue (which is now resolved) was basically the only glaring issue with musl outside of proprietary software (which often depends on glibc, IMO they should statically link their libc but glibc doesn’t really support that)
Do NUCs still have an internal header for the power switch, like in a standard PC? Maybe the Pi could be resurrected to control that as a GPIO, for remote access :)
WoL isn’t that hard to set up though, why wouldn’t you just use that, like the author ultimately did?
Is this an argument? Mobile editing is dog shit. It’s just awful top to bottom. I can’t believe we’re 15 years into iOS, and they still don’t have frigging arrow keys let alone actually useable text editing. Almost daily, I try to edit a URL in the mobile Safari and I mutter that every UX engineer at Apple should be fired.
You know the UX engineers on the Safari team would just love to not have to expose the URL at all…
I don’t really know why you’re singling out Safari, when Google/Chrome have a long history of actually trying to get rid of displaying URLs. And it’s been driven not by “UX engineers”, but primarily by their security team.
For example:
https://www.wired.com/story/google-chrome-kill-url-first-steps/
(and to be perfectly honest, they’re right that URLs are an awful and confusing abstraction which cause tons of issues, including security problems, and that it would be nice to replace them… the problem is that none of the potential replacements are good enough to fill in)
Both Apple and Google suck. What’s your point?
My point is that I’m not aware of Apple, or “UX engineers on the Safari team”, being the driving force behind trying to eliminate URLs, and that we should strive for accuracy when making claims about such things.
Do you disagree?
No one claimed that Safari is the driving force for anything. A commenter just brought it up as a source of personal annoyance for them.
Shrug! Android Play Store, the app, does this. Terrifying! It breaks the chain of trust: Reputable app makers link to an url (thankfully, it’s still a website), but you have to use the app anyway to install anything, which has nowhere to paste the url, let alone see it, so you can’t see if you are installing the legit thing or not. Other than trust their search ranking, the best you can do is compare the content by eye with the website (which doesn’t actually look the same).
I’m reluctant to install third-party apps in general, but, when I do, preserving a chain of trust seems possible for me: if I click a link to, say, https://play.google.com/store/apps/details?id=com.urbandroid.sleep on Android, it opens in the Play Store app; and, if I open such a URL in a Web browser (and I’m signed in to Google), there’s a button to have my Android device install the app. Does either of those work for you?
Wow! That did not work in Firefox just one month ago (when I had to install Ruter on my new phone). Now it does. I tried Vivaldi too, and it doesn’t even ask whether I want to open it in Google Play.
Browser devs to the rescue, I guess, but as long as the app isn’t doing their part – linking to the website – the trust only goes one way.
The upside: it reduces the amount of time you want to use your phone, which, for most people, is a good thing.
Does it though? I mean, you’ll spend much longer fiddling to get the text right!
If you think “oh this’ll just be a quick reply” and then end up actually typing more than you thought you would, it makes sense to finish the job you started on mobile, which then actually takes more time. Especially when you’re on the go and you have no laptop with you.
It really just means I use the phone for composing conceptually light things because I don’t want to mess with it any more than necessary. (This is likely an adaptation to the current state versus a defense of how it is.)
I don’t miss arrow keys with iOS Trackpad Mode[1]. The regular text selection method is crap, but it works well enough doing it via Trackpad Mode.
I think part of the problem with the iOS Safari URL bar is that Apple tries to be “smart” and modifies the autocorrect behavior while editing the URL, which in my case, ends up backfiring a whole lot. There’s no option to shut it off, though.
Wow, I had no idea this existed! Apple’s iOS discoverability is atrocious.
Agreed. Just the other day I found the on screen keyboard on my iPad was floating and I couldn’t figure out how to make it full size again without closing the app. A few days later I had the thought to try to “zoom” out on the keyboard with two fingers and it snapped back into place!
As someone more comfortable with a keyboard and mouse, I often look for a button or menu. When I step back and think about how something might be designed touch first, the iOS UX often makes sense. I just wish I had fewer “how did I not know that before!” moments.
I mean, what meaningful way is there to make it discoverable? You can’t really make a button for everything on a phone.
One other commonly unknown “trick” on ios is that clicking the top bar often works as a HOME key on desktops, but again, I fail to see an easy way to “market” it, besides clippy, or some other annoying tutorial.
Actually, the ‘Tips’ app could actually have these listed instead of the regular useless content. But I do think that we really should make a distinction between expert usage and novices and both should be able to use the phone.
I really don’t have an answer to that. I’ve never looked through the Tips app, not have I been very active in reading iOS-related news[1]. Usually I just go along until I find a pain point that’s too much and then I try to search for a solution or, more often, suffer through it.
[1] I do enjoy the ATP podcast, but the episodes around major Apple events are insufferable as each host casually drops $2,000 or more on brand new hardware, kind of belying their everyman image.
The other problem I encounter near daily is not being about to edit the title of a Lobsters post on the phone. It really sucks.
The far more frustrating thing on lobste.rs is that the Apple on-screen keyboard has no back-tick button. On a ‘pro’ device (iPad Pro), they have an emoji button but not the thing I need for editing Markdown. I end up having to copy and paste it from the ‘Markdown formatting available’ link. I wish lobste.rs would detect iOS clients and add a button to insert a backtick into the comment field next to the {post,preview,cancel} set.
Long-press on the single-quote key and you should get a popup with grave, acute etc accents. I use the grave accent (the one on the far left) for the backtick character.
Edit testing if
this actually works
. It does!Thank you! As someone else pointed out in this thread, iOS is not great for discovery. I tried searching the web for this and all of the advice I found involved copying and pasting.
This is a general mechanism used to (among other things) input non english letters: https://support.apple.com/guide/ipad/enter-characters-with-diacritical-marks-ipadb05adc28/ipados
Oddly enough, I knew about it for entering non-English letters and have used it to enter accents. It never occurred to me that backtick would be hidden under single quote.
You can make a backtick by
holding down
on single quote until backtick pops up, but it’s pretty slow going.This seems super useful, but I’ve spent the last ten minutes trying to get it to
It seems either that my phone’s touchscreen is old and inaccurate or I am just really dang bad at using these “newfangled” features.
I agree with your other reply - discoverability is atrocious. I learned that you can double/triple tap the back of your phone to engage an option which blew my mind. I wonder what I’m missing out on by not ever using 3D touch…
Samesies. The funniest bit, at least for me, is that I’m usually just trying to remove levels of the path, or just get back to the raw domain (usually because autocomplete is bizarre sometimes). This would be SUCH an easy affordance to provide since URLs already have structure built-in!
You may already know about this, but if you put the cursor in a text field, and then hold down on the space bar, after a second or two you enter a mode that lets you move the cursor around pretty quickly and accurately.
edit: I guess this is the “trackpad mode” mentioned below by /u/codejake
I find the trick of pressing down on spacebar to move the cursor works pretty well.
It’s okay but it’s still not as good as digital input for precision.
The problem is that Apple phones don’t have buttons.
No phones do anymore, it seems…
Arthur C Clarke predicted this in The City And The Stars. In its insanely-far-future society there is a dictum that “no machine shall have any moving parts.”
I wish people would be a little pickier about which predictions they implement and maybe skip the ones made in stories with a dystopian setting. Couldn’t we have sticked to nice predictions, like geostationary satellites?
It’s hidden, but… tap url bar, then hold down space and move cursor to where you want to edit. Now normal actions work ( e.g. double tap to select a word).
That said I agree with your second sentence.
The trackpad mode works very poorly on the iPhone SE because you can’t move down since there’s no buffer under the space key, unlike the newer phone types. It doesn’t work well for URLs because the text goes off screen to the right, and it moves very slowly. Ironically I’m on an iPad and I just tried to insert “well” into the last sentence and the trackpad mode put the cursor into the wrong place just as I released my tap. It just sucks. This is not a viable text editing method.
I wish the Matrix team would have more focus. They’re working on all this new experimentall stuff - a new client, a new server etc all the while the existing stuff is severely broken in many ways.
I think you’ve entirely missed the point: we’ve focused specifically on fixing the existing severely broken stuff by writing a client to replace the old broken client. We haven’t written a new server; we added an API to the existing server via a shim, so we could focus and implement faster. There are no new features in Matrix 2.0 (other than native group VoIP) - everything else is either removing stuff (the broken old authentication code in favour of Native OIDC), or fixing stuff (the horrific performance problems, by introducing Sliding Sync and Faster Joins).
With the new server, I was thinking of Dendrite. It’s good you’re fixing Element with Element X, but it feels like it’s been in beta forever, while people keep running into problems with the old Element.
Synapse (the 1st gen server) has simply had the most focus, by far - Dendrite has ended up being a test bed for experimentation. Synapse has improved unrecognisably over the years and now is basically boring stable tech.
What about issues related to e2ee and verification? Some of these have been open for a very long time, and I’ve personally experienced many of these, for years. It definitely gives the impression that the Matrix team has lost focus when problems like these exist in a core feature for Matrix (e2ee).
https://github.com/vector-im/element-android/issues/5305
https://github.com/vector-im/element-android/issues/2889
https://github.com/vector-im/element-android/issues/1721
There are tons more of these issues in your bug tracker, some even reported against the new rust crypto thing, these are just some of the ones I am subscribed to. Is functional E2EE a priority for Matrix?
I can’t remember when I last had this issue because of matrix doing something wrong. So maybe it’s just not happening that often. I personally wouldn’t say it even exists from my experiments..
We rewrote e2ee on a single audit-ready rust codebase rather than chasing the combinatoric explosion of bugs across the various separate web, ios & android implementations, which took ages, but has finally landed with the exception of Web, which should merge next week: https://github.com/vector-im/element-web/issues/21972#issuecomment-1705224936 is the thing to track. Agreed that this strategy left a lot of people in a bad place while the rewrite happened, but hopefully it will transpire to be the right solution in the end.
What if Signal got a post-phone-number makeover? ;]
I’d very much like that. Apparently it’s already in the codebase; they just need to turn it on. Sick of waiting for this, to be honest.
I don’t think that the code in the codebase is actually doing the right thing. Signal has inherited a design flaw from the phone network and email: they conflate an identity with a capability. Knowing my phone number should not automatically give you the right to call me.
The thing I want from Signal is the ability to create single-use capabilities that authorise another party to perform key exchange. That lets me generate tokens that I can hand to people (ideally by showing them a QR code from the UI) that let them call me from their Signal account but don’t let them (or whatever social networking malware they’ve granted access to their address book) pass on the ability to call me. Similarly, I want to be able to give a company a token that lets them call me but doesn’t let them share that ability with third parties.
This would also significantly reduce spam. If I have someone’s phone number in my address book and they have mine in theirs, access can be granted automatically, but anyone else would need to be authorised before they can send me messages. Spam fighting is the main reason they give for keeping the server code secret, and it’s only necessary because of a fundamental design flaw in the protocol.
Unfortunately, Signal wants to add new kinds of identifiers but keep conflating them with capabilities, rather than fixing the problem.
Adding new identifiers will be useful in group chats (currently, I can’t join a group chat without sharing my phone number with everyone there), letting me have a per-group identifier, but that doesn’t help much if one malicious person in the group can leak that identifier and then any spammer can use it to contact me. If they built a capability mechanism then I could authorise members of the group to send me group messages but not authorise anyone else to contact that identity and, if I wanted to have a private chat with a group member, explicitly authorise that one person to send me private messages.
Most of the infrastructure for doing this was already added for sealed senders but I haven’t seen any sign that anyone is working on it.
Recently encountered something that works exactly like that btw: https://simplex.chat
Super interesting. I’d heard of SimpleX but never looked into it much; their white paper is an interesting read so far.
What it illustrates is that we simply can’t rely on a centralized service.
I strongly believe that Moxie is doing all he can in good faith. That Signal is “good”.
But any centralized authority, even if benevolent, cannot be a long term solution. Even if it is “easier”.
There are legitimate usability and UX problems with federated and/or decentralised chat platforms. As well as more technical cryptographic hurdles compared to a centralised solution. However I agree wholeheartedly with your point - there are just problems that need to be solved before any decentralised messaging system is accessible and seamless enough for “normal” users.
Also, if memory serves, I don’t believe Moxie works with Signal any longer. I think he’s left.
Yes, Moxie wrote at length about the challenges of federation. The main one being the difficulty of coordinating changes and improvements.
In addition to UX, if Signal were widely federated, it might be 100x harder to add PQC like they just did, if it involved convincing every Signal server admin to upgrade.
Rightly or wrongly, federated systems are more ossified, and in the case of something like Signal, that presents future security risks.
The change primarily (or even only) affects end-to-end components, meaning the server infrastructure is minimally (or not at all) affected. 100x harder it definitely is not.
But that is for ideological reasons, not technological ones. Federated systems often emphasise compatibility - that isn’t a technical requirement though. If you are in control of the primary server as well as the main client, you can force changes anyway. It raises the bar for deployments in that federation but that’s a good thing.
I dunno, email is the ultimate federated communication platform, and we still don’t have widespread encrypted email (without relying on a central provider). So maybe it’s not harder because of the server software, but it sure seems a lot harder to me.
What federated systems have E2EE enabled? I’m genuinely curious, because AFAIK systems like Matrix and Mastodon don’t. But I may be wrong.
matrix and xmpp support e2ee.
I could swear there was an article on here just recently about how E2EE in Matrix adds a ton of complexity.
Thanks for replying. I don’t get why people get their panties in a twist over Signal when these alternatives exist.
Matrix is pretty awful from a normal user’s pov - slow, inconsistent, buggy. I think that’s why Signal is much more widely used.
I get what you’re saying, but federated systems have much larger consequences than just the server infrastructure. Perhaps I should have said “centralized” instead, since the relevant issue is that Signal is solely responsible for all server and client code. They don’t need to do the slow business of coordination, which, as we’ve seen from older systems like email/IRC/Jabber, tends to take a long time before improvements can be relied upon.
In another part of lobste.rs right now, Mastodon is being scorched for not acceding to each and every demand put to it by other members of the fediverse. If Mastodon was dominant enough to unilaterally enforce, say, E2EE on ActivityPub, is that decentralized? Would that be a popular move?
Moxie is no longer CEO of Signal, by the way: https://www.theverge.com/2022/1/10/22876891/signal-ceo-steps-down-moxie-marlinspike-encryption-cryptocurrency
My understanding was that he was stepping down as CEO but still very involved with the project. I may be totally wrong on this.
The centralization of Signal and the refusal to let any alternative client connect to the central Signal server is a strong decision by Moxie, for a lot of technical reasons I think I understand (I simply disagree that those technical decisions should take precedence over their moral consequences). But, at least, Moxie has a real, strong and opinionated ethic.
I hope that whoever comes next will keep it that way. It is so easy to be lured by the blinking lights when you start to have millions of users. That’s why we should always consider Signal a temporary solution. It is doomed from the start, by design. In one or ten years, it will be morally bankrupt.
The opposite could be said of the Fediverse. While the official Mastodon project has already shown signs of being “bought”, Mastodon is not needed to keep the fediverse going. It could be (and already is) forked. Or completely different implementations can be used (Pleroma is one of the most popular).
Whenever I hear this I think of how Whisperfish is a thing and how I should look at https://molly.im/
Those forks were, at first, really criticized. If I remember correctly, they were even briefly blocked.
Due to the social pressure, Signal is now mostly ignoring them but they are really not welcome.
I really want to use Signal, and recommend it to my friends and families, but I’m also sick of waiting for them to offer end-to-end encrypted backups on iPhone (it’s apparently possible on Android).
I’d like a browser client.
Not going to happen without in-browser code verification, which needs quite a lot of coordination between standardisation bodies and browser vendors. WhatsApp’s approach is not enough.
Running a local server that had a browser interface would be no problem.
Any program that has network access can listen on ports, so if any malicious code gets localhost:1234 before Signal does, it gets all the cookies even if it can’t access your files.
Isn’t this more of a concern of the security of a machine? Wouldn’t key loggers and others be more of a concern?
The only thing they’d need is to add a “secure mode” to Service Workers, which would prevent all bypasses. The difficulty is of course preventing the abuse of it for persistent client-side takeovers on compromised websites; I don’t know if a permission dialog would be good enough since people don’t actually read what they say.
what if they could be signed and could store data to be readable only by workers signed with the same key?
Me too. My browser is at least decently accessible to me, the Signal desktop client is not.
This looks interesting. However, one weird nit. Recommends Tailscale as part of Setup. Why? Ah. Works at Tailscale
I mean, it seems like a pretty vital part of how they have chosen to configure their network. Cards on the table, this is pretty similar to my setup (and I don’t work there), but I feel like it’s pretty easy to replace that one step with e.g. Nebula and end up in ideologically the same place
I don’t mind it as long as people disclose commercial relationships up front.
Not really.
Xe is a prolific blogger whose content gets linked to all the time and who is always tinkering with Tailscale in crazy setups. They are so transparent that at the bottom of the linked article is a link to their salary history! This isn’t secretly a promotion for Tailscale.
I don’t think this is fair. That information should be at the beginning of the article, or right before the first mention of Tailscale. I doubt that most readers who got the link to that article from a link aggregator will know where Xe works, nor will they end up reading other pages on their site.
It definitely reads like one to me.
coupled with the fact that there are now ads on xe’s site, the incentives are starting to feel weird.
I dunno - I really enjoy Xe’s blog and have learned a whole lot from it. I don’t mind if they happen to make some pennies sharing their thoughts with us.
I was thinking the same.
Luckily there’s also an alternative to using their service:
https://github.com/juanfont/headscale
Hey, is there a place I can download a versioned tarball of the software?
The zip contains the git repo, so you can check out any version you want.
Or did you mean provide a binary? I have no intention of providing binaries.
From a packaging perspective, Void Linux’s policy is to only get specific versions, ideally from a tarball download (for caching and checksumming purposes). Is there a backlight-auto-0.0.1.tar.gz somewhere I can download?
Isn’t it best to clone a repo or download the source and host it yourself, instead of pulling from sources which may not be online in the future?
Is it enough to host the git repo on len.falken.directory? Then once again, you can download any version…
Ultimately I can host that file but there’s no promises of it existing forever.
For build infrastructure, stuff works nicest when we can download an artifact (of source code) and go “This is $X package at $Y version.” This is of course source code so we can build from source, but it’s way nicer and lower maintenance (on the packaging side) than trying to wrangle a git repo, installing git on the builders, etc.
A low-effort way to achieve this on the software maintainer side is a github or gitlab repo, where you tag a version like v0.0.1. The forge then generates a download link like https://github.com/keybase/client/archive/v${version}.tar.gz, providing a cachable artifact without doing any repository management. [Downside: github sometimes changes their automatic tarball generation technique, invalidating existing checksums for no apparent reason.]
I can for sure provide a tag in a git repo, I just don’t want to use github or gitlab :)
Will your forge or git tool provide a URL with a tarball?
this versioned source tarball can also be hosted from a static webserver, like it’s 1999.
Yep! That’s probably even better ;) but does require a commitment to keeping that tarball around. Or, if your git tool can generate tarballs for tags, and does so repeatably (so the checksums are the same every fetch), this can ease server maintenance hassles.
I’ve given in for now and will host it from GitHub https://github.com/lf94/backlight-auto/archive/refs/tags/0.0.1.tar.gz
In the future this’ll change, but I just don’t have the time right now. :) In any case I’ll try to remember to ping you or you’ll see the failure on your end and you can ping me
Thanks! Turns out Void doesn’t yet have Zig 0.11 or it’d be packaged already :)
I’m excited for Zig to break its LLVM dependency so we can update zig without bringing all of LLVM along.
There’s a good chance Zig 0.10 works; I haven’t tried.
I enjoy Zig, I might take a shot at crafting a build.zig for 0.10 and seeing what breaks from there
I just checked chrome://settings/adPrivacy and it appears that everything was already off. Maybe I disabled this some time ago? Anyone else able to check their settings?
They will absolutely turn these settings back on in any/all future updates. They’ve certainly done it before.
They have? Source? It’s not that I don’t believe you, I’m just curious.
I don’t know if this is what @grawlinson was thinking of, or even if it’s the only example, but my mind immediately went to the time Google got fined millions of dollars for tracking people’s locations even though they had turned the feature off.
you also have to trust that those settings have any real meaning at runtime…
In general I do trust that, yes.
Don’t forget that google has a strong financial incentive to push this.
I have not forgotten that. I just think that having a fake switch would be particularly bold.
It was bold to remove a public “don’t be evil” statement.
I would love an ARM e-ink laptop.
I’ve seen folks hack super expensive eink displays into existing laptops, but I don’t trust my abilities enough to risk breaking one.
It’s also sad how most (all?) of the technology still requires proprietary junk…
Not all - the ReMarkable 1 can be run on 100% libre software, with Parabola-RM. Obviously, since Parabola is FSF-approved you can’t get wifi unless you recompile the kernel, and of course the RM1 is a tablet and thus doesn’t have a keyboard but that’s a surmountable problem.
https://www.modos.tech/ has some intriguing prototypes of eink laptops, but seem to be waiting for more interest
Great overview of what’s going on. I am 100% behind the OpenTF response to this.
If you care about your tools being free/open, it’s best to avoid anything covered by a CLA where you can. The one and only purpose for open code to have a CLA is enclosure.
It’s also worth remembering that when a company has all the power to redefine the licensing terms, anything they promise can change at any time.
If you want to relicense something as a new version of the GPL, you would need a CLA, right? It’s one of the stated things preventing Linux from relicensing, even if Linus agreed to do it.
That was also a big difference between emacs and xemacs, right?
After the pain of getting from GPLv1 to GPLv2, the FSF did two things: require copyright assignment to the foundation for a bunch of core GNU projects, and include a clause in the GPLv2 allowing derived works to be released under the GPLv2 or any later version. Unfortunately Linus dropped that clause from the version used by Linux, so Linux is stuck on v2.
Not if you ask Linus.
I’ve never seen such a short and clear description of GPLv2.
Thank you, I didn’t know that reciprocity was the most important aspect for Linus.
I personally think EUPL should become the default license for open-source projects in the business context to promote reciprocity. It’s a weak copyleft license (OSI and FSF approved), so it does not strike mortal fear of “viral” license spread on business partners. Similar to EPL, MPL, and LGPL, it creates an obligation of sharing back the changes made to weak-copyleft licensed portion of the code. Finally, EUPL closes the SaaS loophole, which is similar to a non-existent Affero LGPL license.
Like every software license, until and unless the EUPL is tested in court, it’s an unknown quantity, and will be rejected outright by any legal team worth their salt.
(Every GPL license, including but not limited to GPLv2, GPLv3, and the AGPL, are also generally verboten.)
If you want your software to be usable in a business context, you have precisely two options: Apache or MIT.
The project I am involved in is EPL-licensed and there were no problems with the license at more than a few large orgs.
In which country/countries were those large orgs established? And how was your project used? e.g. a component of a SaaS offering, a dependency in a deployed application, etc.?
European orgs, used as a dependency in deployed apps shared with customers as well as internal SaaS. Note that in my last comment, I said EPL (Eclipse Public License), not EUPL (European Union Public License). EPL is a well-known per-file (weak) copyleft license, its scope is well understood. If your org forbids EPL, go tell lawyers that Clojure, JUnit, and Eclipse IDE code is licensed under EPL if you want to have some fun. So, the bit about “precisely two options” is not quite true.
I guess that means the EPL has been battle-tested in European courts – cool! News to me. Add that one to the list, I suppose, for European organizations.
So… no business uses Linux (GPL) with GNU tools (ditto)? No business uses a BSD operating system or any BSD-licensed code?
Heck, Microsoft used bits of the BSD TCP/IP stack. So I have no idea what your claim here is based on.
Sorry for being insufficiently precise. When I said “business context” I meant to describe proprietary and closed-source software produced by a company, which is shipped to or used by customers, and which drives revenue.
Then your statement of “precisely two options: Apache or MIT” still is incorrect. BSD at the very least has to be included, since plenty of proprietary software has been built using BSD-licensed code.
And then there’s stuff like the Qt dual-license situation where you can accept it under the GPL or pay them for a license that lets you do proprietary stuff. MySQL does that too.
OK, my original statement was incorrect, and should have included the BSD license. Thank you for pointing out my error. I’m not sure if this mistake is relevant to my underlying point(s).
I wonder how this works. The EUPL says you must publish source code while you are “providing access to its essential functionalities”, but it also says you can combine the software with code distributed under a compatible licence, and communicate the derived work under the terms of the compatible licence. That seems like a straightforward way to re-open the SaaS loophole.
Well, you first need to create an OSS project that needs to include the EUPL project code (not a near-identical fork but a sufficiently expressive work, see §1 for the definition of “Derivative Works”). I also imagine the judge in a potential lawsuit to be especially sceptical if your company is both the author of such a slim fork and is the one accused of violating EUPL. I’ve never heard a discussion in a European company on how to use OSS code without fully complying with its license (it’s usually a discussion around “Is using this OSS project legally risky? Should we avoid using it?”). Anyone doing so is clearly risking a lawsuit and I have not seen lawyers in big companies open their companies up to risk.
Note that the venue for legal disputes is the EU country where the authors reside or are legally registered, and Belgian law if the authors reside outside the EU. I expect this to have a deterrent effect on non-EU companies thinking of doing something funny with forking/relicensing without doing significant dev work.
In general, I am considering EUPL mostly for libraries and components where the viral aspect of a license would have a chilling effect on downstream users. If you are just pulling an EUPL-licensed library via Maven or pip, you don’t have to do anything. If you made a small fix in a fork, it most likely will not rise to a level of a Derivative Work. If you indeed forked a project and created a lot of your own code around it, then yes, this license indeed allows you to license your major fork under LGPL or what have you. This naturally means that the entity doing so would need to license a significant amount of their work under LGPL.
If you have a deployable project like Mastodon or MongoDB, it makes more sense to apply AGPL to it.
GPLv2 doesn’t contain a notice that the software may be distributed under the terms of any later version. It is just common practice to put this in the license notes. In other words, the source files themselves (i.e. foo.c) contain the “or later version” clause in their top comment, but the license itself (i.e. the LICENSE file) does not.
I guess it’s theoretically possible, but I’ve never seen or heard of a case where the CLA made something more open.
There have been cases (Netscape, Sun) of companies relicensing as OSI but in those cases they already held the copyright and there were no CLAs. In practice I think if the project started out free/open and there wasn’t a single Org in charge of it, getting the requisite signatures is probably an impossible task for a non-trivial case.
That depends on the licence itself. Many licences include a section to release under the next version. For example the GPLv3 has this
Who decides what counts as a later version of the same license? Could I publish something and call it a successor to the GPL and so therefore a “later version” of GPLv3? I assume not… but how is this actually legally regulated?
The GPL is a document that is owned by the FSF. Unless they’ve changed it, it includes text that specifically prohibits you from creating modified derived works, which always struck me as slightly amusing. Even without that, the ‘as published by the Free Software Foundation’ bit would probably be interpreted as meaning that newer versions must also be published by the FSF.
One of the concerns with this clause was that FSF leadership might decide that the MIT license is GPLv4. If you have this clause in your license, you have no way of preventing MIT-licensed derived works.
The license typically says who; the GPL says versions released by the Free Software Foundation.
As another example, the CDDL lists Sun Microsystems as the license steward, stating that they may publish a new version, and lists the effect of a new version.
Contributions to Home Assistant require a CLA, so I’ve always suspected they’ll be heading down this route at some point.
I use NewPipe every day, have an adblocker on my computer and phone and am absolutely shocked when I watch YouTube at other places and see how many ads they are pushing. Even more shocking is how people have gotten used to it.
Of course Alphabet/YouTube has to finance itself somehow, but 11,99€/month for YouTube Premium is definitely overpriced. If you consider that YouTube only makes a fraction of a cent per ad view per person, they could probably be profitable at 0,99€/month, and I would be willing to pay that.
This would not resolve one other major issue I have with YouTube: The website is bloated as hell. NewPipe is smooth and native, runs on F-Droid and allows easy downloading (audio or video).
Are you sure? ;-P
So, I pay for a family premium account, primarily to disable ads on stuff my kids watch. I have several motivations.
Now that they’re starting to find content they really like (Explosions and Fire is great for kids, if a bit sweary) I’m going to fund them to choose their favourite creator each, via Patreon or similar.
Edited to add: I also run ad-blocking, at a network level. I have no qualms at all about ad-blocking in general, and think that ad-tech is a blight on the world. Please don’t mistake my willingness to pay for YouTube Red (or whatever it is called) as a criticism of adblocking.
I will teach my children to know when they’re being ripped off and how they can protect themselves and their valuable time.
But they’re pretty obviously not being ripped off, if they want to watch the content, right?
They may not be “ripped off” but video creators will be either way, for a skillset that was once highly valued.
Can you elaborate on that? Do you mean, ripped off by adblocking, YouTube / other tech aggregators’ models in general, … ?
I am referring directly to the systematic devaluation of their otherwise professional labor.
Intentionally systematic do you think, or an unintended consequence of the technology?
I’ve not had a lot to do with video creators professionally. But when we did engage one to create some YouTube videos capturing our company and what it was like to work there, I was super impressed. Worth every cent and so much better than anything most amateurs could produce.
I don’t know that I’m interested in trying to divine if people intend to exploit or just accidentally help build systems which do. The purpose of a thing is what it does.
So when the thing does something you consider undesirable, how then do you determine whether to incrementally improve the thing, reform the thing, or burn the thing to the ground?
with its measurable circumstances and effects
So you don’t consider what the intended purpose of a thing was, in that process? Even if only for the purpose (heh) of contemplating what to replace it with?
Not interesting.
Consider Chesterton’s Fence.
That would be a useful principle for me if you replaced the need to understand the intentionality of systems with understanding the material circumstances and effects of them. Spinoza, Marx, and the second-order cyberneticists had the most useful views on this in my opinion.
As a child I wanted lots of plastic toys which cost a lot of money. Advertising works really well for doing that!
I wanted them and they were a rip off.
Ah, yeah - what I meant was, if you want YouTube content and loathe the ads, paying to remove them probably isn’t a rip-off.
At least, in the proximate transaction. I do wonder how much of that $11 or whatever goes to the content creators. I know it’s zero for some of the content my kids watch, because it turns out swearing Australian chemistry post-docs blowing things up isn’t “monetizable” :)
Hence paying the creators through a platform like Patreon. Although I’d rather not Patreon specifically.
(Edited to add: I remember how much I treasured my few Transformers toys as a child. Hours of joyous, contented, play alone and with others. Sure they were expensive for what they were, physically, and cartoon TV advertising was a part of what enabled that. It’s a similar deal with my kids and Pokemon and MTG cards … total rip-off from one angle (“it’s just a cardboard square”) but then you see them playing happily for hours, inventing entire narratives around the toys. Surely that’s no rip-off?)
True, but I hope you understood what I meant.
This convinced me to give Newpipe a try and omg it is so much better than using Firefox even with uBlock Origin, let alone the Android Youtube app. Thank you so much for the recommendation!
I’m glad my recommendation was helpful to you! :)
Remember that YouTube Premium also comes with other things, like a music streaming service. Apparently YouTube tentatively thinks 6,99€ is around what the ads are worth.
how much of that makes it back to the artists and people creating videos for google’s platform?
Much like any creative endeavor, the platform/middleman almost certainly takes a big cut, and then what’s left is probably a Pareto-type distribution where a small number of top-viewed channels get most of the payout.
On the other hand, if you don’t pay for it and don’t watch any ads, then it’s likely that the people creating the videos get nothing as a result.
Enough that they find it worth their time to do so, rather than… not.
flagged as spam, since there’s absolutely no way Oracle is making this statement in good faith.
With tens to hundreds of 2FA accounts… the only sane way to utilize 2FA is just putting them in my password manager, so they can be quickly found/selected/auto-filled. 4 2FA codes is nice… but then they would only be for the login on my PC itself, and the password-manager unlock.
what’s the point of using 2FA if they are stored in the same place as your passwords?
2FA is for people who don’t use secure passwords. When the password is 20 random characters, used for only one purpose, and locked behind a suitable pass phrase, using a good (slow) password key derivation function, its security is orders of magnitude higher than the typical possibly reused low-entropy memorable password. Unless the security of the service is the kind of joke that stores passwords in plaintext, 2FA is hardly needed.
Now there’s always the phishing attack, but the only reliable way out of that one is a hardware token, which can authenticate the service you’re logging in to.
this… isn’t true at all.
Fact: for people who use one high-entropy password per service, the security benefits of 2FA are marginal.
Fact: for people who use the same low-entropy password everywhere, the security benefits of 2FA are kind of major.
Conclusion: 2FA is (mostly) for people who don’t use secure passwords. See what I mean?
You can have the highest entropy password possible (lol), but it won’t help if it’s compromised… and now 2FA is a major benefit.
You need to think about how exactly passwords get compromised: (1) reuse of the same password across services, (2) offline cracking of a leaked password database, (3) the service itself storing or leaking the password in plaintext, and (4) phishing.
My password is immune to (1) because I use it for a single website only. It’s immune to (2) because of its high entropy (even if the website stored an unsalted fast hash of it). Finally, (3) is extremely unlikely for websites that make the effort to add 2FA.
That leaves (4), but TOTP can also be phished. Not the shared secret, but once a session is started, attackers can generally do maximum damage (some operations may be password/2FA protected, but that is so far from enough…).
The only reliable solution that actually increases security compared to a password alone is the possession of a security token: a local procedure that can authenticate the website, as well as being authenticated by it. A software security token could be compromised if my computer is compromised or stolen, so the very best here is a hardware security token. The hardware security token might still be compromised if it is stolen by an adversary that performs power analysis, but stealing its keys won’t help if the real keys are derived from a password I input into it.
Until I have a security token I can use, I’m holding on to my passwords for as long as I can.
If you use a password manager, that could be compromised. Basically, do you want to put ultimate trust in your password manager? I don’t, thus 2FA.
How do you compromise KeepassXC? My password manager isn’t a website, it’s local software with a local copy of the encrypted database.
Social engineer your passphrase from you, bug in the software, etc.
That’s what I had in mind. Now think of the power such an attacker would have:
If they can phish me into typing my password, they can phish me into typing my TOTP code. They get in in both cases, and there’s a good chance they can change my credentials right then and there (and if they do it quickly and automatically, my TOTP code might still be valid, so they can easily change my credentials and lock me out for good).
There is no way to exploit bugs in my local software if they don’t already have meaningful control over my computer. If they do have that control, they can likely log my keystrokes and copy my database. Or failing that, intercept the clipboard and get whatever specific password I’m copying. Even if they can’t steal my TOTP recovery codes (which are most likely stored in my password database, but let’s say I’m paranoid enough to put them elsewhere), they can still log my TOTP temporary code when I’m logged in and again lock me out of my online account.
In both cases, TOTP fails to increase my security. My password manager with its local database makes TOTP utterly useless. Hardware tokens on the other hand can stop phishing attacks. Knowing that phishing is by far the bigger threat, this makes hardware tokens pretty useful.
TOTP is also authenticated. You don’t just randomly enter any 6 digit code. It’s a symmetric key that both parties have. Its biggest drawback is that the key can be copied and therefore there’s no guarantee that only one party has it.
A hardware token’s biggest strength is that they can’t easily be copied. Thus, it’s something you, and only you have.
Maybe I wasn’t clear. My point here is that TOTP does not help you identify the website you’re logging in to. If you’re logging in to scam.example.com and failed to notice it wasn’t the real deal, checking your phone for the relevant 6 digits won’t help you. And the scam website can then just forward all your credentials (including TOTP) to the real website and steal your identity or whatever.
Hardware tokens are different. Since they’re not passive, they can perform (authenticated) key exchange with the service, and if they use a different key for each service (possibly by deriving their private key from the service’s name, for instance), trying to log into the scam’s website will just cause you to use a different key, and the login will fail. Or at least the scammers won’t be able to connect you to the real website, let alone perform a real MitM attack; the best they can do is make you believe you’re logged in and trick you into leaking information while you do, but at least they won’t have your credentials.
I recall this study, by Google I think, about how 2FA affected phishing attacks. All forms of 2FA reduced successful phishing attempts, but only hardware tokens completely eliminated them.
Ah! So you’re talking about something more “modern” — WebAuthn and friends?
None of the hardware tokens I’ve used actually do this yet.
That kind of thing, yes. (I guess this fine wrist band TOTP generator technically counts as a “hardware token”, but it doesn’t use the protocols that completely stop phishing.)
Note that a properly set up phone could probably serve as a hardware token. Obviously the attack surface is much larger, but as long as the phone is unhacked it should work.
They add a time-based component. Regular passwords are practically static. Say your password gets man-in-the-middled in transit, or you type it into something that looks like your bank but isn’t. With the usual 30s TOTP expiry time, those credentials are only good for a few seconds, which limits their usefulness, as an attacker has to use them right away.
In this scenario we’re not protecting against loss of the password store, admittedly.
Does anyone know of any platforms like this available now?
Yes, the MNT Pocket is pretty much a successor of the N900: https://mntre.com/media/reform_md/2022-06-20-introducing-mnt-pocket-reform.html
It’s open hardware, and the indie lab / company that produces them has a track record of delivering on their promises.
PinePhone with the keyboard case is not too far off.
N900 with postmarketOS can do all of the “things your iphone can’t”, and it also has functional wifi, a modem, etc.
postmarketOS supports a bunch of devices, you might even have one laying around!
https://wiki.postmarketos.org/wiki/Devices
You can also take the work from postmarketOS and port it over to mobile-nixos, they work pretty similarly. I’ve now ported mobile-nixos to a few old devices and I love it. I now have an ebook reader, a games console and various phones running NixOS.
The user interfaces are never quite right, but it’s great to have a bunch of powerful ARM-based devices with full access to NixOS.
I’m a bit surprised no one has brought up the pinephone yet (or the pro, since that’s the one that’s got usable hardware specs). It has a keyboard case, is relatively modern ARM, and moderately good firmware/software support (YMMV).
The modem is 3G, so it won’t work with U.S. carriers at least. There may be a few countries which still have 3G carriers, but I don’t know which.
still a great suggestion though.
I love my N900, but it’s also really starved for RAM, even back in the day. I can’t imagine how well a modern Linux stack would run on it.
it runs alright as long as you don’t try to use the “modern” web :D
It would be remiss not to mention the DragonBox Pyra, though “available” is a stretch:
https://pyra-handheld.com/boards/pages/pyra/
This advice would be much stronger with an example
There’s a general lack of information in this article, but you can buy their book!
–> spam
I mean, on the one hand, yes I’m lazy. On the other hand, trying to figure out how to sanely create a .deb package has devoured far more hours of my life than it really deserves, and I like Debian.
The article’s answer seems to be “don’t” — it’s on Debian to package your software for Debian, not on you.
if you want your software in debian (or any other distro), I doubt it’ll happen automatically for most cases unless you do it yourself. It’s not like distro maintainers are constantly looking for software to package/maintain.
Not sure about the tag I used - couldn’t find a more appropriate one, could someone change it?
off-topic?
Not really.
This essay is an admirable display of restraint. I would have been far crueler.
In my experience, protocols that claim to be simple(r) as a selling point are either actually really complex and using “simple” as a form of sarcasm (SOAP), or achieve simplicity by ignoring or handwaving away inconvenient details (RSS, and sounds like Gemini too.)
The “S” in “SNMP” is a vile lie.
It’s simple compared to CIMOM, in the same way LDAP is lightweight compared to DAP.
Let’s not forget ASN.1, DCE, and CORBA. Okay, let’s forget those. In comparison SOAP did seem easier because most of the time you could half-ass it by templating a blob of XML body, fire it off, and hopefully get a response.
Exactly, and the next-order effect is often pushing the complexity (which never went away) towards other parts of the whole-system stack. It’s not “simple”, it’s “the complexity is someone else’s problem”.
what’s inconvenient about RSS?
Some of my personal grievances with RSS 2.0 are:
Obviously, neither are too important – RSS works just fine in practice. Still, Atom is way better.
RSS never specified how HTML content should be escaped, for example.
The Atom protocol resolved that however.
Pretty sure that’s because RSS2 is not supposed to contain HTML.
But RSS2 is just really garbage even if people bothered following the spec. Atom should have just called itself RSS3 to keep the brand awareness working.
The RSS trademark (such as it was) was claimed by Dave Winer, who opposed Atom.
But I don’t think it was ever enforced against the RSS1 people, whom he also opposed.
Well, Winer’s way of arguing was never really via the legal system, it was by being a whiny git in long-winded blog posts. Besides, RSS versions <1.0 were the RDF-flavored ones (hence RSS == RDF Site Summary), and no-one wanted that anymore.
<=1.0, and people kept using 1.0 long after 2.0 existed because some people still wanted that :) Though those people were mostly made happy by Atom, and then 1.0 finally died.
Incorrect: RSS2 was 2002, Atom was 2005.
Teaches me to not read wikipedia correctly.
O god, don’t get me started. RSS 2 lacked proper versioning, so Dave Fscking Winer would make edits to the spec and change things and it would still be “2.0”. The spec was handwavey and missing a lot of details, so inconsistencies abounded. Dates were underspecified; to write a real-world-usable RSS parser (circa 2005) you basically had to keep a dozen different date format strings and try them all until one worked. IIRC there was also ambiguity about the content of articles, like whether it was to be interpreted as plain text or escaped HTML or literal XHTML. Let alone what text encoding to use.
I could be misremembering details; it’s been nearly 20 years. Meanwhile all discussions about the format, and the development of the actually-sane replacement Atom, were perpetual mud-splattered cat fights due to Winer being such a colossal asshat and several of his opponents being little better. (I’d had my fill of Winer back in the early 90s so I steered clear.)
Which version of RSS? :)
I see what you did there.
I’ve been using Migadu for a few years now; they’re great. The best thing about them is that I always get very quick replies from their support teams when needed.
The pricing looks great for individual use, but I’m a little concerned about the limits for incoming and outgoing mail.
Are you on the Micro plan? Have you ever exceeded the limit?
I’m on the Micro plan and never come close to the limits. YMMV.
Me too, and I am subscribed to a few mailing lists
I host the email for a few dozen accounts on their largest account, and it works smoothly. The webmail is okay. No calendar integration or the like, which was a pain point for a few of the users when I migrated from an old GMail service when Google decided to start charging for it.
Their support really is excellent.
This might not be what you’re looking for but they do have basic CalDAV support. No web interface for this though.
I wonder what CalDAV server they use.
I believe it’s sabre/dav.
Best thing to do if you’re considering it is to look at how much mail you’ve sent and received in previous months. I think there’s a half-decent Thunderbird add-on that’ll summarise that information for you if nothing else.
Also, if you’re keen on moving but are pushing the limit on sends then remember that there’s no reason you need to always use their SMTP service! I often use my ISP’s (sadly now undocumented) relay and never had any bother.
I had to bump up to the mini plan to accommodate a family member’s small business running under my account. It was painless.
Yes, I’m on the Micro plan, and I haven’t come close to the limits. If there were one day when I exceeded the limits I’m sure they wouldn’t mind; if I had higher email flux in general though I’d be happy to pay more.
I just started using them for some things, and the amount of configurability they give you is crazy (in a great way, that is). I’m going to move all of my mail hosting to them some day.
only bad thing is their web portal - it doesn’t remember logins and the search is slow / dysfunctional
otherwise i love migadu and will always sing its praises
As in the web mail interface? I assumed that was more of a toy/demo since I use IMAP.