Since all of the in place algos here use swap operations and it makes mention of issues like cache efficiency I’m curious how different swap algorithms affect the real world performance of these.
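If anyone wants to poke at this, here’s a rough sketch of how you might start measuring (hedged: the function names and sizes are arbitrary, and a proper cache-effects comparison really wants to be done in C with large arrays and varying strides; in Python this mostly probes interpreter overhead):

```python
import timeit

# Two common swap styles; the C analogue would compare a temp-variable
# swap against an XOR swap over arrays large enough to fall out of cache.
def swap_tuple(xs, i, j):
    xs[i], xs[j] = xs[j], xs[i]

def swap_temp(xs, i, j):
    tmp = xs[i]
    xs[i] = xs[j]
    xs[j] = tmp

xs = list(range(1_000_000))
# Swap distant elements so the two accesses touch different cache lines.
print(timeit.timeit(lambda: swap_tuple(xs, 0, 999_999), number=1_000_000))
print(timeit.timeit(lambda: swap_temp(xs, 0, 999_999), number=1_000_000))
```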
I have an SE/30 I’ve been meaning to recap (it still works but is starting to show some telltale signs). The sad part for me is how rare the expansions and upgrades for it are getting. You could have a 640x480 greyscale internal display, an upgraded processor, and a built-in Ethernet port… if you’re extremely lucky and spend thousands on eBay.
While useful, this isn’t really equivalent to AirDrop, which is a peer-to-peer technology that sends the data over an ad-hoc WiFi connection between two devices without even touching your router.
As I’ve also mentioned on the other HTML boilerplate, you don’t need to specify the <html>, <head>, and <body> tags, as they are implied. I will typically include the <html> tag because it needs a lang attribute, but I won’t close it.
Omitting these tags reduces the indentation level of the whole document, which in my opinion makes the document more readable for humans.
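As a concrete illustration (a hedged sketch; the title and body text are placeholders), the following is a complete, valid HTML5 document in that style:

```html
<!doctype html>
<html lang="en">
<meta charset="utf-8">
<title>Example page</title>
<!-- head and body are implied by the parser; html is left unclosed -->
<p>Body content starts here.
```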
It seems the conclusion isn’t “8 bits are enough for a version number” but rather “a bunch of userspace software ASSUMED 8 bits would be enough for a version number, so we better just stick with it if we don’t wanna break stuff”
Userspace didn’t just assume that though, it was built into the way that the version codes are exposed in the userspace headers. And with the overflowed version there are now multiple kernel versions that userspace can no longer tell apart by comparing those version codes.
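To make the mechanics concrete, here’s the packing scheme mirrored in Python (a sketch of the KERNEL_VERSION macro from the userspace headers):

```python
# Mirrors the C macro from the kernel's userspace headers:
#   #define KERNEL_VERSION(a, b, c) (((a) << 16) + ((b) << 8) + (c))
# The patch level only gets 8 bits, so it wraps at 256.
def kernel_version(major, minor, patch):
    return (major << 16) + (minor << 8) + patch

# 4.9.256 yields the same code as 4.10.0, so userspace comparing
# LINUX_VERSION_CODE values cannot tell these kernels apart.
assert kernel_version(4, 9, 256) == kernel_version(4, 10, 0)
print(hex(kernel_version(4, 9, 256)))  # 0x40a00
```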
If you run CrossHair on code calling shutil.rmtree, you will destroy your filesystem.
I wonder if there is a way of introducing a sandbox filesystem or similar to work around this?
I’ve been thinking about this a little! Something like pyfakefs might work (and could potentially give you the deterministic behavior that CrossHair also requires). That said, I’m not sure I’d have enough confidence in the completeness of a tool like this to recommend people use it with CrossHair. Possibly?
There are other kinds of effects besides the filesystem of course too: network, peripherals, etc. As I understand it, securely sandboxing Python is quite challenging. Interested in what ideas people here might have for me!
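For what it’s worth, here’s the rough shape of the pyfakefs idea (a sketch, assuming Patcher behaves as documented; I haven’t validated any of this against CrossHair itself):

```python
import shutil
from pyfakefs.fake_filesystem_unittest import Patcher

# Inside the Patcher context, filesystem calls in os/io/shutil are
# redirected to an in-memory fake, so even rmtree can't touch the
# real disk.
with Patcher() as patcher:
    patcher.fs.create_file("/tmp/scratch/data.txt", contents="hello")
    shutil.rmtree("/tmp/scratch")  # deletes only the fake directory
    print(patcher.fs.exists("/tmp/scratch"))  # False
```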
Better abstractions for storage. I was going to say “files” — because it’s terrible how many hoops you have to jump through just to update a file reliably — but really, the filesystem itself is an idea whose time has gone.
I’m fascinated by things like the Apple Newton’s “soup”, a rich data store kind of like a simple object or graph database, that all applications shared. It lets you represent the kind of complex structured data found in the real world, like address books and email, in structured schema that make it globally useful, without having to write a bunch of single-purpose APIs like the iOS/Mac AddressBook framework. From there you can go on to add replication features and a really-global namespace, like IPFS…
Networking needs a do-over too. Not so much at the abstraction level, but better APIs. One of the things I only really learned this year is what a nightmare it is to write real-world TCP networking code on the bare-metal POSIX APIs. It looks simple at first — socket, bind, connect, send, recv — but doing it well requires reading a stack of fat hardback books by Richard Stevens. And don’t even get me started about adding TLS!
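To make that concrete, here’s roughly what the “looks simple at first” version amounts to, via Python’s thin wrapper over the same calls (host and request are placeholders); every line hides a failure mode those fat hardbacks spend chapters on:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5.0)  # without this, connect/recv can hang forever
sock.connect(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

# recv() returns *up to* 4096 bytes per call, so real code must loop;
# it also needs to handle timeouts, interrupted syscalls, and peers
# that vanish mid-stream -- none of which is handled here.
chunks = []
while True:
    data = sock.recv(4096)
    if not data:  # empty read means the peer closed the connection
        break
    chunks.append(data)
sock.close()
print(len(b"".join(chunks)), "bytes received")
```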
I still really regret the way the NeXT folks took over Mac OS development. Yes, classic Mac OS was a train wreck, but there were still a lot of really good ideas inside Apple that were shunned because of the acquisition.
I was there at the time, and argued a lot with the NeXT folks, but later decided they were mostly right. And of course the marketplace agreed.
’90s Apple had some great ideas, but they weren’t implementable on an ‘operating system’ made out of popsicle sticks and rubber bands. Their major effort at a new OS (Pink/Taligent) was too blue-sky and expensive, and trying to make old and new app APIs coexist in a single process (Copland) was doomed to fail.
I do feel like the politics of the acquisition (which I was late to, starting there in ’04) were really toxic, and a lot of babies were thrown out with the classic bathwater. Oh well.
a lot of babies were thrown out with the classic bathwater
I hear you there. The classic Mac OS had plenty of problems, but it also had a lot of great features that you just don’t have in even the most advanced modern operating systems.
Take, for example, the GUI-first approach to file and system management: you could install an entire OS with drag and drop. You could keep multiple operating system versions on one drive and switch between them by simply renaming a directory and rebooting. In the older, smaller versions you could simply drag your operating system onto a floppy disk and suddenly it was available in the startup disk selector. You could go into a GUI utility to create a RAM disk, drag and drop your OS onto it, select it as your startup disk, reboot, and be running entirely from RAM, then unmount your hard disk for super-low-power computing on a PowerBook.
Modern Apple is NeXT wearing an Apple skinsuit. I noticed that a lot of the people who toe the party line over at Apple were the ones from the NeXT acquisition, whereas the ones less so were there before.
Yeah. I started there in 2004, so feelings were still pretty raw. There were epic arguments about e.g. file extensions, which … yuck.
I like Apple hardware a lot, and I know all of the standard this-is-why-it-is-that-way reasoning. But it’s wild that the new MacBook Pros only have two USB-C ports and can’t be upgraded past 16GB of RAM.
Worse yet, they have “secure boot”, where secure means they’ll only boot an OS signed by Apple.
These aren’t computers. They are Appleances.
Prepare for DRM-enforced planned obsolescence.
I would be very surprised if that turned out to be the case. In recent years Apple has been advertising the MacBook Pro to developers, and I find it unlikely they would choose not to support things like Boot Camp or running Linux-based OSes. Like most security features, secure boot is likely to annoy a small segment of users, who could probably just disable it. A relevant precedent is the addition of System Integrity Protection, which can be disabled with minor difficulty. Most UEFI PCs (to my knowledge) already have secure boot enabled by default.
Personally, I’ve needed to disable SIP once or twice but I can never bring myself to leave it disabled, even though I lived without it for years. I hope my experience with Secure Boot will be similar if I ever get one of these new computers.
Boot Camp
Probably a tangent, but I’m not sure how Boot Camp would fit into the picture here. ARM-based Windows is not freely available to buy, to my knowledge.
Disclaimer: I work for Microsoft, but this is not based on any insider knowledge and is entirely speculation on my part.
Back in the distant past, before Microsoft bought Connectix, there was a product called VirtualPC for Mac, an x86 emulator for PowerPC Macs (some of the code for this ended up in the x86-on-Arm emulator on Windows and, I believe, in the Xbox 360 compatibility mode for Xbox One). Connectix bought OEM versions of Windows and sold a bundle of VirtualPC and a Windows version. I can see a few possible paths to something similar:
The likelihood of any of these depends a bit on the economics. In the past, Apple has made a lot of money on Macs and doesn’t actually care if you run *NIX or Windows on them, because anyone running Windows on a Mac is still a large profit-making sale. This is far less true with iOS devices, where a big chunk of their revenue comes from other services (and their 30% cut on all App Store sales). If the new Macs are tied more closely to other Apple services, they may wish to discourage people from running another OS. Supporting other operating systems is not free: it increases their testing burden and means that they’ll have to handle support calls from people who managed to screw up their system with some other OS.
Apple’s new Macs conform to one of the new Arm platform specifications
We already know for certain that they use their own device trees; no ACPI, sadly.
Supporting other operating systems is not free
Yeah, this is why they really won’t help with running other OSes on bare metal; their answer to “I want another OS” is virtualization.
They showed a demo (on the previous presentation) of virtualizing amd64 Windows. I suppose a native aarch64 Windows VM would run too.
ARM-based Windows is available for free as .vhdx VM images if you sign up for the Windows Insider Program, at least.
In the previous Apple Silicon presentation, they showed virtualization (with of-course-not-native Windows and who-knows-what-arch Debian, but I suspect both native aarch64 and emulated amd64 VMs would be available). That is their offer to developers. Of course nothing about running alternative OS on bare metal was shown.
Even if secure boot can be disabled (likely – “reduced security” mode is already mentioned in the docs), the support in Linux would require lots of effort. Seems like the iPhone 7 port actually managed to get storage, display, touch, Wi-Fi and Bluetooth working. But of course no GPU because there’s still no open PowerVR driver. And there’s not going to be an Apple GPU driver for a loooong time for sure.
I think dual-booting has always been a less-than-desirable “misfeature” from Apple’s POV. Their whole raison d’être is to offer an integrated experience where the OS, hardware, and (locked-down) app ecosystem all work together closely. Rip out any one of those and the whole edifice starts to tumble.
So now they have a brand-new hardware platform with an expanded trusted base, so why not use it to protect their customers from “bad ideas” like disabling secure boot or side-loading apps? Again, from their perspective they’re not doing anything wrong, or hostile to users; they’re just deciding what is and isn’t a “safe” use of the product.
I for one would be completely unsurprised to discover that the new Apple Silicon boxes were effectively just as locked down as their iOS cousins. You know, for safety.
They’re definitely not blocking downloading apps. Federighi even mentioned universal binaries “downloaded from the web”. Of course you can compile and run any programs. In fact we know you can load unsigned kexts.
Reboot your Mac with Apple silicon into Recovery mode. Set the security level to Reduced security.
Remains to be seen whether that setting allows it to boot any unsigned kernel, but I wouldn’t just assume it doesn’t.
They also went into some detail at WWDC about this, saying that the new Macs will be able to run code in the same contexts existing ones can. The message they want to give is “don’t be afraid of your existing workflow breaking when we change CPU”, so tightening the gatekeeper screws alongside the architecture shift is off the cards.
I think dual-booting has always been a less-than-desirable “misfeature” from Apple’s POV. Their whole raison d’être is to offer an integrated experience where the OS, hardware, and (locked-down) app ecosystem all work together closely. Rip out any one of those and the whole edifice starts to tumble.
For most consumers, buying their first Mac is a high-risk endeavour. It’s a very expensive machine and it doesn’t run any of their existing binaries (especially since they broke Wine with Catalina). Supporting dual boot is Apple’s way of reducing that risk. If you aren’t 100% sure that you’ll like macOS, there’s a migration path away from it that doesn’t involve throwing away the machine: just install Windows and use it like your old machine. Apple doesn’t want you to do that, but by giving you the option of doing it they overcome some of the initial resistance of people switching.
The context has switched, though.
Before, many prospective buyers of Macs used Windows, or needed Windows apps for their jobs.
Now, many more prospective buyers of Macs use iPhones and other iOS devices.
The value proposition of “this Mac runs iOS apps” is now much larger than the value proposition of “you can run Windows on this Mac”.
There’s certainly some truth to that but I would imagine that most iOS users who buy Macs are doing so because iOS doesn’t do everything that they need. For example, the iPad version of PowerPoint is fine for presenting slides but is pretty useless for serious editing. There are probably a lot of other apps where the iOS version is quite cut down and is fine for a small device but is not sufficient for all purposes.
In terms of functionality, there isn’t much difference between macOS and Windows these days, but the UIs are pretty different and both are very different from iOS. There’s still some risk for someone who is happy with iOS on the phone and Windows on the laptop buying a Mac, even if it can run all of their iOS apps. There’s a much bigger psychological barrier for someone who is not particularly computer literate moving to something new, even if it’s quite similar to something they’re more-or-less used to. There are still vastly more Windows users than iOS users, though it’s not clear how many of those are thinking about buying Macs.
There are still vastly more Windows users than iOS users, though it’s not clear how many of those are thinking about buying Macs.
Not really arguing here, I’m sure you’re right, but how many of those Windows users choose to use Windows, as opposed to having to use it for work?
I don’t think it matters very much. I remember trying to convince people to switch from MS Office ‘97 to OpenOffice around 2002 and the two were incredibly similar back then but people were very nervous about the switch. Novell did some experiments just replacing the Office shortcuts with OpenOffice and found most people didn’t notice at all but the same people were very resistant to switching if you offered them the choice.
Here is the source of truth from WWDC 2020 about the new boot architecture.
People claimed the same thing about T2-equipped Intel Macs.
On the T2 Intels at least, the OS verification can be disabled. The main reason you can’t just install e.g. Linux on a T2 Mac is the lack of support for the SSD (which is managed by the T2 itself). Even stuff like ESXi can be used on T2 Macs; you just can’t use the built-in SSD.
That’s not to say it’s impossible they’ve added stricter boot requirements, but I’d wager that, like other security enhancements in Macs that cause some to clutch their pearls, this too can probably be disabled.
… This is the Intel model it replaces: https://support.apple.com/kb/SP818?viewlocale=en_US&locale=en_US
Two TB3/USB-C ports; max 16GB RAM.
It’s essentially the same laptop, but with a non-Intel CPU/iGPU, and with USB4 as a bonus.
Fair point! Toggling between “M1” and “Intel” on the product page flips between 2 ports/4 ports and 16GB RAM/max 32GB RAM, and it’s not clear this is a base model/higher tier toggle. I still think this is pretty stingy, but you’re right – it’s not a new change.
These seem like replacements for the base model 13” MBP, which had similar limitations. Of course, it becomes awkward that the base model now has a much, much better CPU/IGP than the higher-end models.
I assume this is just a “phase 1” type thing. They will probably roll out additional options when their A15 (or whatever their next cpu model is named) ships down the road. Apple has a tendency to be a bit miserly (or conservative, depending on your take) at first, and then the next version looks that much better when it rolls around.
Yeah, they said the transition would take ~2 years, so I assume they’ll slowly go up the stack. I expect the iMacs and 13-16” MacBook Pros to be refreshed next.
Indeed. Could be they wanted to make the new models a bit “developer puny” to keep from cannibalizing the more expensive units (higher end mac pros, imacs) until they have the next rev of CPU ready or something. Who knows how much marketing/portfolio wrangling goes on behind the scenes to suss out timing for stuff like this (these are billion-dollar product lines), all to try to hit projected quarterly earnings a few quarters down the road.
I think this is exactly right. Developers have never been a core demographic for Apple to sell to - it’s almost accidental that OS X being a great Unix desktop, coupled with software developers’ higher incomes, made Macs so popular with developers (iOS being an income gold mine helped too, of course).
But if you’re launching a new product, you look at what you’re selling best (iPads and MacBook Airs) and you iterate on that.
Plus, what developer in their right mind would trust their livelihood to a 1.0 release?!
I think part of the strategy is that they’d rather launch a series of increasingly powerful chips, instead of starting with the most powerful and working their way down - makes for far better presentations. “50% faster!” looks better than “$100 cheaper! (oh, and 30% slower)”.
It also means that they can buy more time for some sort of form-factor update while having competent, if not ideal, machines for developers in-market. I was somewhat surprised at the immediate availability given that these are transition machines. This is likely due to the huge opportunity for lower-priced machines during the pandemic. It is prudent for Apple to get something out for this market right now since an end might be on the horizon.
I’ve seen comments about the Mini being released for this reason, but it’s much more likely that the Air is the product this demographic will adopt. Desktop computers, even if we are more confined to our homes, have many downsides. Geeks don’t always appreciate these, but they drive the online conversations. Fans in the Mini and MBP increase the thermal envelope, so they’ll likely be somewhat more favourable for devs and enthusiasts. It’s going to be really interesting to see what exists a year from now. It will be disappointing if at least some broader changes to the form factor and design aren’t introduced.
Developers have never been a core demographic for Apple to sell to
While this may have been true once, it certainly isn’t anymore. The entire iPhone and iPad ecosystem is underpinned by developers who pretty much need a Mac and Xcode to get anything done. Apple knows that.
Not only that, developers were key to switching throughout the 00s. That Unix shell convinced a lot of us, and we convinced a lot of friends.
In the 00s, Apple was still an underdog. Now they rule the mobile space, their laptops are probably the only ones that make any money in the market, and “Wintel” is basically toast. Apple can afford to piss off most developers (the ones who like the Mac because it’s a nice Unix machine) if it believes doing so will make a better consumer product.
I’ll give you this: developers are not top priority for them. Casual users are still number one by a large margin.
Some points: as seen in this submission, Apple does the bare minimum to accommodate developers. They are certainly not prioritized.
I don’t really think it’s so one-sided towards developers - sure, developers do need to cater for iOS if they want good product outreach, but remember that Apple are also taking a 30% cut on everything in the iOS ecosystem and the margins on their cut will be excellent.
higher end mac pros
Honestly, trepidatiously excited to see what kind of replacement Apple silicon has for the 28-core Xeon Mac Pro. It will either be a horrific nerfing or an incredible boon for high-performance computing.
and can’t be upgraded past 16GB of RAM.
Note that the RAM is part of the SoC package. You can’t upgrade it afterwards; you must choose the correct amount at checkout.
This is not new to the ARM models. Memory in Mac laptops, and often desktops, has not been expandable for some time.
I really believe that most people (including me) don’t need more than two Thunderbolt 3 ports nowadays. You can get a WiFi or Bluetooth version of pretty much anything, and USB hubs solve the issue when you are at home with many peripherals.
Also, some Thunderbolt 3 displays can charge your laptop and act like a USB hub. They are usually quite expensive but really convenient (that’s what I used at work before COVID-19).
It’s still pretty convenient to have the option of plugging in on the left or right based on where you’re sitting, so it’s disappointing for that reason.
I’m not convinced. A power adapter and a monitor will use up both ports, and AFAIK monitors that will also charge the device over Thunderbolt are pretty uncommon. Add an external hard drive for Time Machine backups, and now you’re juggling connections regularly rather than just leaving everything plugged in.
On my 4-port MacBook Pro, the power adapter, monitor, and hard drive account for 3 ports. My 4th is taken up with a wireless dongle for my keyboard. Whenever I want to connect my microphone for audio calls or a card reader for photos I have to disconnect something, and my experiences with USB-C hubs have shown them to be unreliable. I’m sure I could spend a hundred dollars and get a better hub – but if I’m spending $1500 on a laptop, I don’t think I should need to.
and AFAIK monitors that will also charge the device over Thunderbolt are pretty uncommon
Also, many adapters that pass through power and have USB + a video connector of some sort only allow 4k@30Hz (such as Apple’s own USB-C adapters). Often the only way to get 4k@60Hz with a non-Thunderbolt screen is by using a dedicated USB-C DisplayPort Alt Mode adapter, which leaves only one USB-C port for everything else (power, any extra USB devices).
I’ve been trying to get a Mac laptop with 32GB for years. It still doesn’t exist. But that’s not an ARM problem.
Update: Correction, 32GB is supported in Intel MBPs as of this past May. Another update: see the reply! I must have been ignoring the larger sizes.
I think that link says that’s the first 13 inch MacBook Pro with 32GB RAM. I have a 15 inch MBP from mid-2018 with 32GB, so they’ve been around for a couple of years at least.
Another gripe about Homebrew: developers are the most likely to have manually installed dylibs, which Homebrew throws a fit about when you run brew doctor. The official response is don’t go whitelisting extra things and just ignore the warnings, which is OKAY I guess, but it just doesn’t feel friendly to developers to me.
Akira. This is a UI/UX design tool for creating UI mock-ups. It hasn’t even had a stable release yet, and it already has 1,526 stars on GitHub and $560/month pledged on Patreon. It has a really cool mascot too.
Cool mascots is how I judge the merit of all software.
Reading this and realising crummy Linux trackpads are a big part of what drove me to tiling WMs, many years ago.
(Off topic, but) I wonder if a similar effort for Linux desktop responsiveness would be popular. Heavy background system load (I/O or CPU) is still able to drive my i7-6500 driven X11 interface to multi-second-latency for mouse clicks and keypresses. Have tried -ck “desktop latency” kernels and currently using -zen kernels, they’re a little better but it still happens to me most weeks.
Yep, it’s a problem. When I/O load starts maxing out, my X11 just completely locks up also (AMD Phenom II here, so slightly older but still beefy enough by modern standards.)
Interestingly, as immature as Haiku is in certain respects, I’ve never had this kind of lockup on it. But its kernel schedulers were specifically designed for GUI use cases, so that is probably a large part of it. I don’t think Linux will start really prioritizing that anytime soon.
I definitely have the same issues, and have since I started using Linux. It feels like it’s only gotten worse, to the point that, on lower-end systems, it can take 30 minutes to get to a VTY to kill the program causing the lag.
Heavy background system load (I/O or CPU) is still able to drive my i7-6500 driven X11 interface to multi-second-latency for mouse clicks and keypresses.
Wow. I don’t think I’ve ever experienced this. I use a stock kernel. Some thoughts/questions:
I was half hoping someone else might have some clues for this. :)
free -m during/after such events and usually more than half is free.

I’m stumped. I run a similarly stripped-down environment (Wingo as my WM, no DE, but it’s a “classic” non-compositing WM just like i3) across many machines. They run the gamut from i3 to i7 Intel CPUs, with between 8GB and 64GB of RAM. Some of them use an AMD graphics card to drive three monitors and others just use the Intel embedded graphics to drive one monitor. None of them experiences lag like what you’re describing. (When I used to use Chrome, it could be laggy at times, but that was specific to Chrome.)
The only other thing that might be different between our setups is that I run a compositor (compton) on top of my WM, mostly to smooth things out and support transparent windows. I don’t quite understand how this could eliminate the lag you’re seeing, but it might be worth a try?
The only other lag I can think of is that sometimes my WM lags a tiny but perceptible amount when switching workspaces where one of the workspaces has a lot of windows on it.
What terminal emulator do you use? I use Alacritty now (with tmux), but I used to use Konsole from KDE, and I don’t really notice any lag difference (that is, neither lags for me).
Lag sucks though. Bummer. Wish I had better ideas for you.
Thanks. I will try spinning up compton, just in case.
EDIT: Just started using urxvt recently, was using sakura. Haven’t noticed any difference yet.
I’ve often wondered whether there would be any value in constructing a desktop operating system with hard guarantees on input/feedback latency, though for a while I assumed we had just covered up the problem with the inevitable march forward in hardware performance.
I recently upgraded and moved all of my networking and server equipment into a half-rack:
I am simultaneously jealous/in awe of this setup and relieved that I don’t have it. It looks really cool and well organized. But I always found myself spending a lot of time keeping things like a DD-WRT config and dyndns working. It was never quite robust (so maybe I should’ve invested yet more time).
IMO U-verse’s stock router/AP is better than many previous generations’ stock ISP gear, both in terms of performance and reliability.
USRobotics Total Control NETServer 8-modem chassis for my BBS
This part takes the cake, a USR modem bank for a BBS‽ Man, who knew this was still kicking? Is there modern BBS software, or are we still using stuff like PCB/WWIV?
spending a lot of time keeping things like a DD-WRT config and dyndns working
I’ve been using DD-WRT for years on routers and have never had to do anything beyond the initial setup and the occasional update. Do you mind if I ask what sort of things happened that required maintenance?
This rant seems to be a response to something:
Another week comes along and with it, another assault on CSS
Unfortunately I didn’t see this [yet another assault on CSS] so I’m just left scratching my head what arguments he’s defending against.
You can have some of the ones I’ve seen; I’ve seen more than I need. The general format is “CSS has [disadvantage]; […] would be better because [advantage]”. The suggested/outlined/sketched alternative would have disadvantages or problems too, but those aren’t mentioned.
It’s largely a matter of comparing existing software against vapourware, and vapourware never segfaults.
So we tried it in #emacs.
It is barely proof-of-concept. With byte-compiled files, it takes about a minute to render this:
https://mathstodon.xyz/@JordiGH/101416783098500727
It’s using the same sort of game grid as Tetris.
I’m not sure if Emacs can really be made to do this kind of thing.
I’m not sure if Emacs can really be made to do this kind of thing.
Wait, what? Are you telling me that a text editor is not the ideal platform to write an emulator in?!
Of course not; it’s a virtual machine/compiler that happens to ship with a text editor as a demo application.
Unfortunately its display routines leave a bit to be desired.
If EMACS had become more popular, people would be bitching about it as much as they currently complain about web browsers. It would probably have become similarly bloated, too.
That’s the price of success.
I think it’s not a very good comparison; Emacs was designed from the ground up to be an application platform rather than being a document delivery platform that got hijacked. These problems come from overcoming assumptions about everything being text-based, which are painful for this kind of thing, but have nowhere near as wide an impact as “this was designed for publishing physics papers and we’re using it for literally every thing.”
For instance, browsers have had billions of dollars poured into them, but there’s still no standardized way to override key bindings, something that has existed in Emacs for several decades. It’s trivial for an end user to extend the behavior of built-in functionality in Emacs, something that still somehow requires extensions in browsers despite it being a critically important part of the web. Even changing the colors used by a browser to display a page is difficult and badly-documented.
For instance, browsers have had billions of dollars poured into them, but there’s still no standardized way to override key bindings, something that has existed in Emacs for several decades.
That’s not a consequence of the “document/platform” thing. That’s a consequence of Emacs trusting the user and the running application. Web browsers do not, and cannot in good conscience, trust either of them, as well as being designed for people who are more likely to accidentally change their key bindings than they are to intentionally do so (comparing rebinding keyboard commands to moving the taskbar and making half the screen gray, to be specific).
Literally another cost of succeeding so hard.
You don’t have to trust the running application to provide an API for declarative key bindings. Browsers have clearly figured out how to hide advanced features from end users who lack the wherewithal to use them.
to ship with a text editor as a demo application.
… and much of the language is built around manipulation of text in an abstract data type called a “buffer.”
This is cool, however lobste.rs already looks great in text based browsers, I access it in my terminal from w3m all the time :)
Or here’s a wild concept: try different programming methodologies and use what works best for you!
Not related to this article in particular, but maybe we need a BSD tag because that tag soup of “dragonflybsd, freebsd, netbsd, openbsd” is pretty silly looking.
From the about page: “Creating new tags and retiring old tags is done by the community by submitting, discussing, and voting on ‘meta’-tagged requests about them”
“To propose a tag, post a meta thread with the name and description. Explain the scope, list existing stories that should have been tagged, make a case for why people would want to specifically filter it out, and justify the increased complexity for submitters and mods.”
Seriously underestimating the effort, since this person believes a PowerPC codebase will just work on Intel after being recompiled.
While the idea of preventing censorship sounds laudable at first glance, I don’t think it’s really a net benefit to most users of a social network. The biggest differentiator between social networks for most users is the user base: who is on it. A major issue I see with these anti-censorship social networks is that they’re full of people who don’t want to be censored for much more nefarious reasons than mine. You end up with a user base similar to 8chan.
I also wonder what they do about spam. Nobody thinks spammers should have an undeniable right to post uncensorable spam, immune from deplatforming. But on a technical level it’s pretty much the same thing.
I am not a fan of Nostr’s design for other reasons, but I am a fan of decentralization and censorship-resistance. I’ll try to make an approachable pitch for it. (Perhaps we need words with less baggage to describe these things… Deplatformed? Silenced? Shadow-banned? Selective restricting? Algorithmic timelines?)
I can go on and on, there is a near-infinite list of examples in modern society of people in positions of power disadvantaging people who are not, to various degrees of atrocity.
These are all things that are addressed by a protocol that is censorship-resistant and decentralized – it removes a single point of control for what the consumers are allowed to interact with.
It’s important to note that moderation is separate from a protocol being censorship-resistant: Moderation is a layer that can be built above it, which can be implemented as an opt-in (like block lists) or as part of a third-party client.
You know a social network (this is a stretch, but bear with me) that has none of these problems? Lobsters.
And yet, it’s fully centralized, has no technical capabilities to prevent users from being banned, shadow banned, censored, deplatformed, etc.
The reason it works is because content quality in a social network is not a technical problem, it’s a social one, and there’s no amount of purely technical solutions that you can throw at it that will solve it.
Lobsters works because it’s small, the community trusts itself and the mods, and we create and police our own rules and social protocols.
The more I think about it, the more I feel like maybe extremely massive social networks are a bit of an unsolvable problem. There might be heuristics and technical tools to make them work in limited contexts, but in the general case, it’s just impossible.
It is also gate-kept [1] by being invite-only. Effectively, to be a member of lobsters, you knew someone from an existing social network who was willing to vouch for you by sending an invite. Admittedly, “knew someone from an existing social network” is very broad, and it includes such things as spending 5 minutes proving to someone in an IRC channel that you have something to contribute.
I’m inclined to agree with that. I also tend to flinch when I hear the phrase “at scale”, because humans and human communities just aren’t built to operate that way. Very very few things should truly operate “at scale”. At-scale is fundamentally inhumane, and if you take it to its logical conclusion, you end up with something like the Borg from Star Trek.
[1] Gatekeeping isn’t necessarily a bad thing. It becomes a bad thing when it leads to accumulation and siloing of wealth, power, and knowledge, or when it is based on factors that are entirely out of one’s control, like genetics. Lobsters is gatekept, but it isn’t a knowledge silo. All of the content here, save private messages, is publicly readable. It’s all very transparent and open.
I wrote about a similar observation here: https://srid.ca/niche
Interesting. Could you link to them?
I can’t link to private instances of this, but here’s a couple of publicly discussed ones:
The reality is that the mastodon instance operator owns your identity (in the FQN sense) and your connections (and can prevent you from migrating them), and your post metadata (can’t reproduce your posts elsewhere without losing metadata). This is what makes Mastodon federated rather than decentralized.
There’s no technical reason why this needs to be true (aside from it being easier to implement centralized versions of things), we have proper decentralized solutions for all of these things (nostr only solves 1/3 of these, we can do better), but Mastodon is not one of them (yet).
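To make the “no technical reason” claim concrete, here’s a minimal sketch of server-independent identity (using the third-party cryptography package; this is the general key-pair mechanism that nostr-style systems build on, not any Mastodon API):

```python
# A hedged sketch: the user's keypair, not an instance operator, is the
# root of identity. Any relay can host the post; anyone can verify it
# came from the key's owner, with no home server involved.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

identity = Ed25519PrivateKey.generate()   # "your" identity: a keypair
post = b"hello, decentralized world"
signature = identity.sign(post)

# A reader needs only the public key to check authorship.
public_key = identity.public_key()
public_key.verify(signature, post)        # raises InvalidSignature if forged
print("post verified against the author's key")
```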
I agree. For me, a good selling point would be “control your own feed using an API” or “sophisticated search capabilities”. I would like less censorship, as a concept, but in practice it doesn’t really add a lot of value to my daily experience.