For the first time in a pretty long time, I have both the time and the motivation to pursue some “extracurricular” (that is, non-work-related and non-family-related) projects, and I’m trying to pick a good mix of learning new technical skills, language skills, and hobbies, including TTRPGs and music.
If anyone has recommendations for content for intermediate level Vim users, I’d love to hear them!
Hare does not, and will not, support any proprietary operating systems.
Good luck. I doubt it will become a 100 year language without people being able to hack on code on their Macs or Windows machines. But I could be wrong of course :-)
Presumably, if it becomes popular, someone other than ddv will port it to Windows, Mac OS, and so forth. This decision is certainly going to increase friction for adoption, though.
Yeah, as much as I like Drew’s work and some aspects of Hare’s design, this is probably going to be the reason it doesn’t gain traction. That being said, with WSL, Windows support is probably less important than ever, but the lack of macOS support will hurt it drastically.
In a previous lobste.rs post, I asserted that you could build a system for doing end-to-end encryption over an existing messaging service in about a hundred lines of code. It turns out that (with clang-format wrapping lines - libsodium) it was a bit closer to 200. This includes storing public keys for other users in SQLite, deriving your public/private key pair from a key phrase via secure password hashing, and transforming the messages into a format that can be pasted into arbitrary messaging things (which may mangle binaries or long hex strings).
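To make the shape of it concrete, here is a rough TypeScript sketch of two of those pieces, deriving the key pair from a key phrase and armoring the ciphertext for pasting, using libsodium-wrappers. This is not the original C code; the salt-from-handle choice and the `e2e:` prefix are illustrative assumptions.

```typescript
import sodium from "libsodium-wrappers";

// Derive a deterministic key pair from a key phrase via libsodium's password
// hash (Argon2id). Salting with the user's handle is an assumption made for
// this sketch, not necessarily what the original ~200-line program does.
async function deriveKeyPair(keyPhrase: string, handle: string) {
  await sodium.ready;
  const salt = sodium.crypto_generichash(sodium.crypto_pwhash_SALTBYTES, handle);
  const seed = sodium.crypto_pwhash(
    sodium.crypto_box_SEEDBYTES,
    keyPhrase,
    salt,
    sodium.crypto_pwhash_OPSLIMIT_MODERATE,
    sodium.crypto_pwhash_MEMLIMIT_MODERATE,
    sodium.crypto_pwhash_ALG_DEFAULT,
  );
  return sodium.crypto_box_seed_keypair(seed); // { publicKey, privateKey }
}

// Encrypt to a recipient's stored public key and armor the result as base64
// text that survives being pasted into chat services that mangle binary.
function armorMessage(
  plaintext: string,
  theirPublicKey: Uint8Array,
  myPrivateKey: Uint8Array,
): string {
  const nonce = sodium.randombytes_buf(sodium.crypto_box_NONCEBYTES);
  const boxed = sodium.crypto_box_easy(plaintext, nonce, theirPublicKey, myPrivateKey);
  return `e2e:${sodium.to_base64(nonce)}.${sodium.to_base64(boxed)}`; // illustrative wire format
}
```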
Please share this with any lawmakers who think that making Signal, WhatsApp, and so on install backdoors will make anything safer.
Please share this with any lawmakers who think that making Signal, WhatsApp, and so on install backdoors will make anything safer.
“Lawmakers” don’t think that banning E2EE will actually prevent the use of E2EE, and engaging with them as if they do is a big part of why we lose these fights.
Being perceived as doing something about a thing that voters perceive as a problem.
Also, the point is rarely to actually make the thing impossible to do, the point is to make it impossible to do the thing while also staying on the right side of the law/retaining access to the normal banking and financial system/etc.
This, for example, was the outcome of SESTA/FOSTA – sex trafficking didn’t stop, and sex work didn’t stop, but legitimate sex workers found it much harder to stay legit.
I wish I could pin this reply to the top of the thread.
Our lawmakers are, by and large, quite competent. They are competent at politics, because our system selects people who are good at politics for political positions.
Part of that is, as you say, being seen to do something. Another part is reinforcing the systems of power they used to rise to the top; any law that makes using encryption criminal criminalizes the kind of person who already uses encryption. It’s worth thinking about why that’s valuable to politicians and their most monied constituents.
Don’t forget the key word “perceived” and the two places it appeared in that sentence.
There’s often a lot of separation between “the problems that actually exist in the world” and “the problems voters believe exist in the world”, and between “the policies that would solve a problem” and “the policies voters believe would solve that problem”. It is not unusual or even particularly remarkable for voters to believe a completely nonexistent problem is of great concern, or that a politician is pursuing policies the exact opposite of what the politician is actually doing, or that a thing that is the exact opposite of what would solve the problem (if it existed) is the correct solution.
(I emphasize “voters” here because, in democratic states, non-voters wield far less influence)
So in order to usefully engage with a politician on a policy, you need to first understand which problem-voters-believe-in the politician is attempting to address, and which perceived-solution to that problem the politician believes they will be perceived-by-voters to be enacting.
Which is why cheeky tech demos don’t really meaningfully do anything to a politician who is proposing backdooring or banning encryption, because the cheeky tech demo fundamentally does not understand what the politician is trying to accomplish or why.
Assuming politicians are competent, what they think people believe should be pretty accurate, and talking to them directly will not change that. What will is changing what voters actually believe, and the cheeky tech demo could help there.
We do need a platform to talk to voters though. So what we need is to reach out to the biggest platforms we can (this may include politicians that already agree) and show them the cheeky tech demo.
I really dislike the position that advertisers put users in. I would probably tolerate preroll ads on YouTube, since, as I understand it, real ad views contribute to payouts for video creators.
Unfortunately, many YouTube ads are loud, violent, gross, and/or contain jump-scares, and there’s no real way to avoid seeing these ads without blocking all ads or subscribing to YouTube Premium. I would much rather not watch YouTube than be subjected to this kind of ad, and holding art hostage behind this kind of crap does not incline me towards giving the hostage-taker money. (I’m not opposed to paying for art in general; I am a lifetime member of Nebula for this reason.)
This is a similar position to that in which advertisers put users on other sites; they say “let us show you ads so we can fund our site”, which seems like a fair trade, but it’s omitting the fact that many ads are themselves absolutely vile, and some contain malware.
I can tolerate one. Maybe two. But every time I view YouTube without an adblocker on a machine that isn’t my own I can be assaulted by 3 and even FOUR preroll ads back to back.
I don’t care if it’s three 30-second ads; it is still extremely infuriating.
90% of the time, if a link takes me to YouTube I hit the back button immediately. A couple of days ago, for the first time in a while, I didn’t. This was my experience:
An ad for something that I could skip after 5 seconds. No idea what the thing was; they didn’t bother to tell me in the first five seconds.
Realised that the bit that I was interested in was probably a few minutes in, so skipped to around the fifth minute.
Was interrupted by another ad immediately (after having watched about three seconds of the video) that required me to wait five seconds.
I got to second 3 before I decided that the content probably wasn’t worth it and closed the tab.
Few things are more emblematic of my Nix experience than the Lobsters front page right now. Side-by-side we have two stories: “Nix Flakes is an experiment that did too much at once…” and “Experimental does not mean unstable” – both referring to Nix flakes, and both presenting completely conflicting ideas about the state of the ecosystem.
I’ve loved making the switch to NixOS. My love for the expression language is not growing, but what it produces is very good. And the difficulty in finding any sort of unified messaging about any topic is absolutely frustrating.
You might be surprised to hear that they are not conflicting ideas, mostly.
We both agree they need to be made stable.
We disagree on the methodology, and on how it should happen.
My understanding of the DetSys post is that they want to pick a date, and remove the flag, whether or not it’s ready.
It’s time to call good enough good enough. I’m calling on the Nix team to remove the experimental flag by the end of the year and to thereby open a new chapter in Nix’s history and pave the way for other worthy goals.
My opinion is that we should aim to pick the underlying features and get them ready for use ASAP, so that Flakes eventually materializes from all of its features being correct and finished.
We both agree Flakes are in use.
(Though I haven’t said it)
Flakes are popular. A lot of publicity has been made around starting with this experimental feature, and it caught on. Whether or not it’s a good thing is another topic, and not part of my article. And truthfully, I don’t know.
We both agree Flakes as it is has to be supported.
You have to read between the lines, but I believe we should give ourselves the maneuvering space to allow deprecating the erroneous bits.
A stable feature in Nix will have to live on for years, even decades, according to current history. Even early Nix expressions are still intended to be fully supported in the foreseeable future.
If it’s merged in quickly with warts, those warts need to be supported as a feature. While it stays an experimental feature, we can soft-deprecate (warn) whenever, and then drop the feature in a more measured fashion (on a longer timeline).
If I really squint, I can see the common ground between your two positions. I imagine if you two had a discussion face to face, you would find even more common ground and maybe even agree on a strategy for dropping the experimental flag. Although as stand-alone pieces, without your commentary here, I was left with the impression that you mostly disagree.
For what it’s worth, I think the strategy that you advocate for is the better one.
Yeah, the main thing is this was not authored as a response to the other post. The only thing that was a response was rushing the publication to this exact moment; I was about to publish “this week” otherwise.
Picking a date, arbitrary or not, could prevent further bikeshedding: having a target goal might make folks cooperate toward something that could otherwise be filibustered indefinitely.
This is my exact feeling. The Nix language is the barrier that discouraged me from accessing NixOS for years after seeing an amazing demo at SCaLE 13x and being totally dazzled. Nixpkgs is awesome, NixOS is awesome, but Nix is full of difficulty and contradictions.
I’m not familiar with the departed, but obituaries of persons who did on-topic things seem to be a common use of the person tag, and my understanding is that “a tag applies” is the definition of “on-topic” here.
Or make it a law that how you pay should be absolutely evident and understandable at a glance to 9 out of 10 randomly selected people. Then, if you find yourself in a situation where it’s not evident how to pay, you just turn on your phone’s camera, record a 360 video, and go about your business, knowing that you can easily dispute whatever fee they throw at you.
This is probably the best answer. No cost to “plot of land for parking” operators, no cost to people. Just record that you couldn’t clearly tell what’s going on and move on with your day.
Maybe? This entire discussion is severely lacking in generality. People are extrapolating wildly from one ranty post in one US city. I could fake another rant saying that parking is free as long as you scan your eyeballs with Worldcoin and it would add as much data…
Plant asphalt-breaking flora at the edges of the lots. Bermudagrass is a good choice if you can obtain it, but standard mint and thyme will do fine for starters. In some jurisdictions, there may be plants which are legal to possess and propagate, but illegal to remove; these are good choices as well.
We can start by not forcing people to use an app to begin with.
In Chicago, they have a kiosk next to a row of on-street parking. You just put in your license plate number, and pay with a credit card. No app needed. At the O’Hare airport, short term parking gives you a receipt when you enter the lot. Then you use it to pay when you exit. No app needed.
Right. The way it used to be everywhere, until relatively recently.
A root problem is that, for a lot of systems like this, a 95% solution is far more profitable than a 99% solution. So companies will happily choose the former. Mildly annoying when the product is a luxury, but for many people access to parking is very much a necessity.
So there’s one way to change this: companies providing necessities have to be held to stronger standards. (Unfortunately in the current US political climate that kind of thing seems very hard.)
You’re talking about public (on-street) parking. This post is talking about private parking lots, which exist for the sole purpose of profit maximization.
The way I see it, the issue is that every random company has to do a positively amazing job of handling edge cases, or else people’s lives get disrupted. This is because every interaction we have with the world is, increasingly, monetized, tracked, and exploited. Most of these companies provide little or no value over just letting local or state governments handle things and relying primarily on cash with an asynchronous backup option. Especially when it comes to cars, this option is well-tested in the arena of highway tolls.
To put it succinctly: stop letting capital insert itself everywhere in our society, and roll back what has already happened.
This seems like it’s just some random for-profit Seattle parking lot (cheap way to go long on a patch of downtown real estate while paying your taxes) that, consistent with the minimal effort the owner is putting in generally, has let whatever back-alley knife fight parking payments startup set up shop as long as they can fork over the dough. It is essentially a non-problem. Even odds the lot won’t exist in two years. There are many more worthwhile things to care about instead.
I disagree. This is going on outside Tier-1 and Tier-2 cities with high population density. Small cities and large towns are finally coming to terms with (using Shoup’s title) the high cost of free parking and replacing meters with kiosks (usually good but not necessarily near where you need to park) or apps (my experience is they’re uniformly bad for all the reasons in the link) to put a price on public parking.
One nearby municipality has all of:
Missing or incorrect signs.
Unclear hours. Is it free after 6pm? Sunday? Holidays? This zone? Seasonally?
Very few kiosks.
QR codes and stale QR codes.
Apps acquired by other app companies and replaced.
Contracts ended or changed, so the QR code or app doesn’t work or, worse, takes the payment but is invalid (this only happened to me once).
Even if you’re local and know the quirks, you’ll have to deal with it.
It’s not just “some random for-profit Seattle parking lot”. I’ve run into frustrating and near-impossible experiences trying to pay for parking in plenty of places. Often compounded by the fact that I refuse to install an app to pay with.
The other day I was so happy when I had to go to the downtown of (city I live in) and park for a few minutes and I found a spot with an old-fashioned meter that accepted coins.
Establish a simple interoperable protocol standard that every parking lot must support by law. Then everyone can use a single app everywhere which fits their needs. I mean, this is about paying for parking, how hard can it be?
I mean, this is about paying for parking, how hard can it be?
I think that’s the thing, though. A company comes in to a municipality and says “this is about paying for parking, we make it easy and you no longer have to have 1) A physical presence, 2) Employees on site, or (possibly) 3) Any way to check if people have paid.” They set you up with a few billboards that have the app listed on them, hire some local outfit to drive through parking lots with license plate readers once or twice a day, and you just “keep the profit.” No need to keep cash on hand, make sure large bills get changed into small bills, deal with pounds of change, give A/C to the poor guy sitting in a hut at the entrance, etc.
I write this having recently taken a vacation and run into this exact issue. It appeared the larger community had outsourced all parking to a particular company that has a somewhat mainline app on the Android and Apple stores, and hence was able to get rid of the city workers who had been sitting around doing almost nothing all day as the beach parking lots filled up early and stayed full. I am very particular about what I run on my phone, but my options were to leave the parking lot and drive another 30 minutes, kids upset, in hopes that the next beach had a real attendant, or to suck it up. I sucked it up and installed the app long enough to pay, and enough other people were doing the same that I don’t see them caring if a few people leave on principle of paying by cash; either way, the lot was full.
I say all this to point out that some companies are well on their way to having “the” way to pay for parking already and we might not like the outcome.
I get that digital payment for parking space is less labor intensive (the town could also do that themselves, btw), but we could, by law, force these companies to provide standardized open APIs over which car drivers can pay for their parking spot. Why don’t we do that?
I’m always in favor of citizens promoting laws they feel will improve society, so if you feel that way I’d say go for it! I don’t, personally, think that solves the issue of standardizing on someone needing a smart phone (or other electronic device) with them to pay for parking. That to me is the bigger issue than whose app is required (even if I can write my own, until roughly a year ago I was happily on a flip phone with no data plan). So if this law passes, the company adds the API gateway onto their website and… we’re still headed in a direction for required smart device use.
But, again, I strongly support engaging with your local lawmakers and am plenty happy to have such subjects debated publicly to determine if my view is in the minority and am plenty happy to be outvoted if that is the direction it goes.
Shutting off traffic to the small number of remaining HTTP sites is part of the steady and relentless enshittification of the internet. It’s not by coincidence that Google is leading the effort. They are doing a lot of other work towards this end as well, with WEI being only the latest example.
I know many of you believe in the security justifications for this move. I just think that this analysis doesn’t take the big picture into account. Google’s justification looks at only one factor (the risk of an adversary spying on your web traffic or injecting malware). The bigger question is, how big a problem is that TODAY in the real world, and what are we losing by shutting off access to those indy HTTP web sites? A more balanced analysis will look at all of the risks and benefits, and will weigh the overall costs of the change.
Let’s consider the web sites that Google wants to shut off access to. These are indy web sites, typically run by hobbyists and enthusiasts. These sites aren’t your bank or your hospital or e-commerce sites. They are often run by people who prefer to run simple software that they fully understand and are in control of. E.g., <cat-v.org>. Or they are old servers that haven’t been updated in years, serving somebody’s hobby web site. I have all sorts of niche interests, and a strong interest in history, so I visit these kinds of sites all the time. I wish to continue visiting them in the future.
What’s the benefit of removing access to these web sites? There’s a risk of an adversary spying on your web traffic or injecting malware. Well, that used to be an important issue, but all the web sites for which this kind of attack matters went to HTTPS many years ago. Today it’s different.
If I visit http://n-gate.com/software then an adversary can see that I am visiting the “software” page, whereas with HTTPS they could only see I am visiting <n-gate.com>. For the small remaining set of indy HTTP web sites, this distinction doesn’t matter. The domain name says everything you need to know about the type of content I’m accessing.
An adversary could inject advertising or other malware into my browser. Maybe I’m checking out the latest Plan 9 release at 9front.org using a malicious internet cafe wifi. But with so few HTTP sites remaining on the internet, I find it hard to believe it’s still cost effective to build this kind of malicious wifi infrastructure (that doesn’t deliver the payload if you are using HTTPS). Nowadays it seems pretty hypothetical. I am frankly much more concerned about malware in the browser, particularly if the browser is distributed by a surveillance capitalist corporation.
In short, I think that the alleged risk of using HTTP on today’s internet is grossly exaggerated.
On the flip side, what are we losing when Google locks down Chrome in this way, and limits access to the long tail of indy HTTP web sites? This is where I think the bigger problem is.
In short, I think that the alleged risk of using HTTP on today’s internet is grossly exaggerated.
The irony is that there are players spying on everyone using the Web; it’s just that (a) it’s Google and Cloudflare, and (b) “HTTPS everywhere” is irrelevant to that problem.
I’m at the point with Google now where even though I love the idea of HTTPS everywhere, and I can’t see any problem with it, I’m just assuming there’s malice afoot that I am not smart enough to figure out.
Just because someone is motivated by greed doesn’t automatically mean that their interests don’t align with yours (capitalism would not work at all otherwise!). Google’s primary concern with HTTP was driven by companies like Comcast hijacking HTTP requests to inject their own adverts into customers’ connections. This threatened to undermine Google’s revenue model and, if it became normalised, was an existential threat (imagine if every ISP replaced Google Ads with their own, transparently, at the proxy). Pushing everything to HTTPS makes this impossible. On the plus side for everyone that isn’t Google, ISPs rewriting legitimate traffic to inject ads is probably more evil than anything that Google has done and moving to a world where it’s impossible is a net win.
Preaching to the choir, man :) I’m way down the economic (and social!) liberal scale, and have Rand and Mises on my literal bookshelf a few metres away. (Also Russell, Lowenstein, and Cipolla - because I’m not a fanatic ;) ).
My perspective on Google isn’t that it’s bad that they’re motivated by greed, or that advertising is intrinsically bad.
Rather: many things that I personally value are problematic for a company that makes its money from Web advertising. So Google tends to act quite directly against my values and interests. That doesn’t make them evil per se, but maybe evil in the original hacker sense.
That’s what I mean by “malice afoot”. I see something seemingly very valuable - omnipresent encryption - and get to worrying “okay but what’s their real goal here”. Maybe it really is just that we’re aligned for once and they’re taking out a competitive threat that I also want rid of :)
Uh, it kind of feels like it’s the point when the post repeatedly says that they’re “breaking” the HTTP web or “removing access” or “shutting off”, which they’re not. It really destroys the entire post’s credibility because it is talking about something that is not happening at all.
The vast majority of Web users are utterly unsophisticated, and if the Web browser with the lion’s share of the market tells them a site is unsafe, they’ll avoid it.
That is essentially breaking the HTTP Web. It’s being done by social engineering, sure, but it’s still breakage.
Why do you continue splitting this hair? It doesn’t dent the credibility of the original post at all; it’s only a useful talking point if you’re trying to deflect criticism from Google.
and if the Web browser with the lion’s share of the market tells them a site is unsafe, they’ll avoid it.
OK, and that’s because the sites are unsafe.
That is essentially breaking the HTTP Web.
I think breaking is much too strong of a word. Users can still visit the sites. Being informed about the risks is not “breaking” anything.
Why do you continue splitting this hair?
I don’t consider it splitting hairs, I see this as a massively exaggerated and incorrect view.
It doesn’t dent the credibility of the original post at all;
I disagree completely. The entire premise of the post is based on this idea that access is being “shut off”. It isn’t.
Like, consider this:
“I wish to continue visiting them in the future.”
The poster literally can continue to do so, indefinitely.
“What’s the benefit of removing access to these web sites? “
They aren’t!
Over and over and over again they say that access is being revoked and it isn’t. Over and over again they talk about how these sites won’t be accessible and that is incorrect.
Really? http://cat-v.org/ is unsafe because it’s not served over HTTPS?
Over and over again they talk about how these sites won’t be accessible and that is incorrect.
Technically, sure, but it doesn’t matter.
For the vast majority of Web users, who are insufficiently clueful (or even motivated!) to wade through the browser warnings and settings, they are inaccessible. Any non-clueful user shouldn’t be trying to work out whether a site is being flagged as unsafe simply because it’s served over HTTP, or because it’s loaded with malware or something.
If you don’t know what HTTPS is, why you might use it, or why you might choose not to - an HTTP site is simply broken for you with these changes.
One thought does occur to me, though. Perhaps a better approach would be to straight up disable HTTP POSTs or XHR to non-loopback addresses. Then people could read over HTTP all they wanted, but to actually interact with a site, they’d need to use HTTPS.
Not displaying the website without a full window overlay that you must click through (like a TLS misconfiguration warning) is essentially the same thing, and you’re aware of that.
Just because nobody pulled the plug doesn’t mean it isn’t insidious.
I am reacting harshly because the tone of that comment is easily read as smug and doesn’t attack the actual points GP brings up - even if that’s not how you intended it.
Not displaying the website without a full window overlay that you must click through (like a TLS misconfiguration warning) is essentially the same thing, and you’re aware of that.
I’m definitely not aware of that because I think that’s a wildly false equivalence.
doesn’t attack the actual points GP brings up
Literally every point is predicated on the false equivalence of “warning a user they are visiting a site that is transmitted unsafely” with “blocking HTTP sites”.
On the flip side, what are we losing when Google locks down Chrome in this way, and limits access to the long tail of indy HTTP web sites? This is where I think the bigger problem is.
The last time the computer system at my library went down, I gave the librarian a card with a URL for the PDF I wanted to print (an HTTP link to my personal server). They had previously told me they couldn’t print something from email, and were pleasantly surprised by how easy it was to access and print. This use case could be destroyed by the coming changes, if the librarian is not comfortable overriding a browser security warning.
Or you could just enable HTTPS, and then there’s no way that someone between your web server and the library can inject malware into the PDF and infect the library computers.
Turning low-cost attacks into medium-cost attacks without actually fixing the vulnerability just means the attackers willing to pay the medium cost will be that much more capable and motivated.
The difference between an attack that can be mounted at scale with a cheap off-the-shelf middle box and one that requires a targeted attack to compromise one of the endpoints is not the difference between low and medium cost.
1.) A browser should be safe by default against any kind of “malware” coming from a website, no matter how it got there.
2.) We will come to a point (and are already close) where it becomes easy to stop people from creating a website. We already have the US stopping (usually financial) companies from doing business with people that are on certain lists. This could be extended to being able to effectively publish a website as well. Not great for freedom of information.
We already have the US stopping (usually financial) companies from doing business with people that are on certain lists. This could be extended to being able to effectively publish a website as well.
While this is, indeed, a very scary story, I don’t really see how this particular change by Google brings us any closer to it, especially in a world where LetsEncrypt (i.e. pure domain validation) is the biggest CA around.
Simply put, it conditions users to disregard non-encrypted websites. Of course we can now blame users, but in the end it doesn’t matter: people will click on a link to read about topic X and then their browser gives a warning and they will leave the page immediately, thinking it’s bad/malware or whatever.
And sure, everyone can use LetsEncrypt - except that now there are 2 attack vectors to prevent someone from publishing information on their website: 1.) by taking the server down and 2.) by revoking the certificate.
Baby steps toward making it harder to publish opinions and information freely.
That’s funny; this argument corresponds point-by-point to the one that anti-vaxxers use: ‘Yeah, getting vaccinated was important when no one had immunity, but now that everyone has it, it’s not as important because of herd immunity; and also did you consider the risks? You should look at the pros and cons’
These sites aren’t your bank or your hospital or e-commerce sites.
Yeah, but they can still be injected with malicious content that asks for logins, passwords, etc., even if the original pages don’t. This is not difficult to do. This is why HTTPS is used in the first place.
“At least I can still serve my site over both HTTP and HTTPS.”
The only reason you should open port 80 on your server is to redirect all requests to port 443 and then close the connection on port 80. (Someday, maybe we can drop port 80 altogether.)
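For reference, that redirect-only listener is a few lines; here is a minimal sketch using Node’s built-in http module in TypeScript (the fallback host is a placeholder, and real deployments usually do this in the web server config instead):

```typescript
import http from "node:http";

// Port 80 does exactly one thing: redirect every request to the HTTPS origin
// and close the connection. The site itself only listens on 443.
http
  .createServer((req, res) => {
    const host = req.headers.host ?? "example.org"; // placeholder fallback host
    res.writeHead(301, {
      Location: `https://${host}${req.url ?? "/"}`,
      Connection: "close",
    });
    res.end();
  })
  .listen(80);
```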
Reads like an angry “just don’t do it, idiot”. If this site was really dedicated to convincing people to use only https instead of ranting, then why not list some actual arguments against using both at the same time?
How it reads depends on what your mindset is when you approach it. If you read it with an open mindset, it’s sober security advice. If you read it with an adversarial mindset, then it sounds like an attack.
How can we list actual arguments for insecure HTTP when there are no valid ones? Should we list actual arguments for injecting user-supplied inputs into SQL query strings next?
How can we list actual arguments for insecure HTTP when there are no valid ones?
I’m not sure I could be convinced by any argument for unsecured HTTP, but certainly there seems to be little point in offering any such argument to one who asserts in advance that “there are no valid ones”.
I, in contrast, am merely not sure. As I see it, someone who runs a website has four options, none of which I see as clearly best:
Pay for a TLS certificate from a traditional (pre-Let’s Encrypt) certificate authority, an expense that the person may be unable to justify to themself.
Use Let’s Encrypt, which doesn’t support users with sufficiently old devices.
The article you linked to suggests using Firefox Mobile, which manages its own certificate store. So users of old devices still have a pathway to browsing Let’s Encrypt certs.
So, sure, the argument is ‘I don’t want to manage certs, and I don’t want to pay for certs, and I want to support users of a vanishingly small number of ancient devices’. If you add on enough conditions, you can definitely find some that aren’t met. My assertion still stands though that this is not really a valid reason to allow HTTP. It’s just being ornery for the sake of being ornery.
So you personally have not read about insecure HTTP attacks, so they didn’t happen? I don’t know about you, but personally I don’t think it’s wise to make security decisions based on ‘not reading’ about attacks personally.
Just because bad things happen doesn’t mean that it justifies any kind of action.
I don’t know about you, but personally I don’t think it’s wise to make security decisions based on ‘not reading’ about attacks personally.
Fair enough. Let’s do it based on facts and statistics. Do you have any about damage caused due to HTTP instead of HTTPS, optimally measured in $? Because I don’t know any.
Having not really been paying attention at the time to the various XMPP servers and whatnot, was there the same sort of issue around blacklisting and petty politics that seems to plague the Fediverse?
I’ve avoided getting involved in that scene in large part due to the outsize amount of “we don’t peer with so-and-so”, “omg you peer with this site, well one person on that site said problematic thing so we don’t peer with them anymore”, etc. That would seem to be a bigger threat than Meta.
I dunno, the network has been growing pretty well for 6 or 7 years at this point and while there are occasional issues with admin drama, it hasn’t yet ended up causing a mass exodus, and I don’t think it will. The closest it got was the snouts.online incident, but like everything with OStatus/ActivityPub, that just led to there being a little island of people who only talk to each other. There similarly is a “sexualized images depicting minors” island, a “Gab/alt-right” island, and a few intentional islands, but that seems like a success of the federated model to me! Those people have their social spaces, and their presence doesn’t cause issues - whether legal or social - for anyone else.
I don’t necessarily think that coupling hosting with moderation in this way is optimal, but it does work, and more so the more instances there are.
The petty politics are pretty specific to a corner of the fediverse (mostly around Mastodon), I think because of the kind of people who came there in the last wave and why they came, and some pretty big bugs in Mastodon exacerbate it. I’ve not observed this issue on other federated networks like SMTP, XMPP, IndieWeb, etc. to date.
Having not really been paying attention at the time to the various XMPP servers and whatnot, was there the same sort of issue around blacklisting and petty politics that seems to plague the Fediverse?
No. There weren’t enough people using XMPP off Google for it to be worth targeting with spam. If you wanted to, it was easy (register a new domain name, deploy a server, and you can spam anyone). Since it was point-to-point, not broadcast, there was no need to block servers for any reason other than spam.
If it had become popular, the lack of spam controls would have been a problem. It’s a bit better than email (you at least need a machine with a DNS record pointing at it to be able to send spam and you can accurately attribute the sender) but not much because the cost of setting up a new server is so low. Some folks were starting to think about building distributed reputation systems. For example, you can build a set of servers that you trust from the ones that people on your server send messages to. You can then share with them the set of servers that they trust and allow messages for the first time from servers on that list, but doing that in a privacy-preserving way is non-trivial.
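As a rough illustration of that idea (not an actual XMPP mechanism, and leaving out the privacy-preserving exchange, which is the hard part), the bookkeeping might look something like this in TypeScript:

```typescript
// Transitive-trust sketch: trust the servers our own users already message,
// and accept first-contact messages from servers that trusted peers also trust.
type Domain = string;

class ServerReputation {
  private trusted = new Set<Domain>();

  // Record that a local user sent a message to someone on `domain`.
  recordOutgoingContact(domain: Domain): void {
    this.trusted.add(domain);
  }

  // The list we would share with peer servers (sharing this without leaking
  // who our users talk to is the non-trivial part mentioned above).
  exportTrustList(): Domain[] {
    return [...this.trusted];
  }

  // Merge a trust list received from a peer, but only if we already trust it.
  importTrustList(fromPeer: Domain, theirList: Domain[]): void {
    if (!this.trusted.has(fromPeer)) return;
    for (const domain of theirList) this.trusted.add(domain);
  }

  // Policy check for an unsolicited first message from an unknown server.
  acceptFirstContact(domain: Domain): boolean {
    return this.trusted.has(domain);
  }
}
```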
Like many opinionated articles about design this one has the implicit suffix “…in a web app.”
A modal window would be a better UI in many circumstances. Why make everything else on the screen vanish just to display this detail?
The reasons given against it seem to be technical issues with web browsers and web APIs. To the extent that’s true, my response is “suck it up, buttercup.” The user’s experience is more important than how much work you had to put into implementing it. (I speak as one who had to implement stuff in the hugely awkward classic Mac OS Toolbox.)
Calling a modern web browser a “document browser” is denial at best.
The web was released in 1991, Google Maps in 2005 (14 years); it’s now 2023 (18 years later). Web browsers have been application platforms for the majority of their existence, and these days a lot of desktop apps are just a browser with different decor (Electron, various phone app web view widgets, arguably React Native).
This isn’t a bad thing. I can write a webpage once and, as long as I color in the lines, it will really run everywhere. Also, I’d much rather have some app dev’s code running in a tried and tested sandbox than give their installer root on my box. Especially given that app developers’ incentives are often not aligned with my best interests.
Remember installing apps and getting BonziBuddy or 10 obnoxious toolbars you can’t uninstall? I do, and I don’t miss it. The web as an app platform is what really killed it. It’s time to let go of “browsers are document viewers”.
It “isn’t a bad thing” compared to the even worse alternative you present, but that’s not the only alternative. I see no reason to think we would be stuck with that if the dominant application platforms weren’t built on abstractions designed for browsing documents.
I’m not sure how something’s provenance has anything to do with what it’s good at 30 years later (IBM PCs were designed to run boring business applications, yet here we are in a world with Cyberpunk 2077 and Photoshop), or what a better system than “punch a web address into literally any device and poof, app” would look like. A walled garden app store? 10 people managing packages in different distro repos for one piece of software in varying degrees of not-quite-up-to-date? C was designed to make UNIX, so should we pooh-pooh people who use it on Windows or macOS? Is Elixir the wrong tool for writing anything other than telephone switches because that’s what the VM was originally made for? Should we have from-scratch rewritten operating systems in the mid 2000s because at the time all consumer OSs had been built for single-processor machines?
No web abstraction built in the last 20 years has been focused on browsing documents, and evolving in generality is what useful software does.
There are good HCI reasons to avoid modal interactions, except when you need to capture the user’s attention for an important decision.
The main one is that modal dialogs block interaction with, and also often a view of, the rest of the screen. If you place the pop-over inline/on a new page, then the user can switch how they like by looking at the rest of the page/the other tab-or-window.
(Edit: link now goes to web page. I had submitted the link as a (now-deleted) Lobsters story, because I felt it was a worthy submission in its own right. It turns out that such submissions should nonetheless be comments.)
Agreed, and it’s not even about technical issues. Some of the things are just plain wrong.
The first two items, “you can’t bookmark/open in a new tab” - wrong
“Back button confusing” - perhaps, depends if you only use modals for non-bookmarkable things (“your comment has been saved”) or the “details page” from the example. If you mix the two, then yes, it might be confusing.
accessibility: like every other thing on your page, this is solely on the app developer, not on the modal.
“webapps should compensate for slow page load” - I don’t see how modals do or do not affect this. Performance is not a related topic - although, you might be using some weird pattern where refreshing a page with a modal is less performant for some reason. I assume the author means open in new tab? Then it’s a performance topic, not UI topic.
“it seems easy” - what? I should not use modals because they’re easy to use?
“looked good in a mockup” - again… what?
I guess this is either a clickbait or a troll article.
Why make everything else on the screen vanish just to display this detail?
This is why I’m pretty well against all modals (web or otherwise) - why make everything else on the screen frustratingly unusable just to display this detail? Want to refer to the information under it? Sometimes works, often too bad. Want to select some text out from under it to copy/paste? Rarely works. Etc.
There’s little more annoying to me than a window dinging when I try to click on it. I’d rather have independent actions that happen when you click ok and go away when you click cancel, but otherwise don’t block your other access.
For web, using a separate page can be a better experience. Modals require JS, lots of JS is slow to load. Modals can often be complete garbage on mobile, especially if they cause the mobile browser to zoom, or parts of them render outside the viewport. I’ve seen loads of modals that could have just been a <form>, and I would have greatly preferred it.
It even has a ::backdrop pseudo-selector, and I think it sets inert on the rest of the body elements so that it doesn’t require a bunch of aria fiddling for proper a11y.
Dialog elements still need JS to be opened as modals. Also, there’s quite a few people still on older browsers who won’t be able to use the element without a polyfill.
The amount of JS needed for a modal is probably less than 1kb, which is less than loading a new page. In terms of UX, losing your entire interaction context, or having the page redraw, is worse than a few ms of latency.
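For scale, the script a native `<dialog>` needs is roughly this (browser-side TypeScript; the element ids are made up for the example):

```typescript
// showModal() traps focus, makes the rest of the page inert, and enables the
// ::backdrop pseudo-element; a <form method="dialog"> inside the dialog closes it.
const dialog = document.querySelector<HTMLDialogElement>("#details")!;
const opener = document.querySelector<HTMLButtonElement>("#open-details")!;

opener.addEventListener("click", () => dialog.showModal());
dialog.addEventListener("close", () => {
  console.log("dialog closed with:", dialog.returnValue);
});
```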
For web, using a separate page can be a better experience.
I think this is highly contextual. If you’re linking to one specific page somewhere, then yes. In the context of a single page, then no javascript is likely better than some javascript.
But if this is a part of a web app, then your users are usually not opening a modal in a new tab, then going back to the first one to continue working in the app. Apps like that have javascript cost anyway, but it can still be optimized to look fast.
I still like your comments waaay better than I do the original article. Yes, they can be tricky to get right on mobile. People use huge libraries just to get some fancy modals. To get them right requires a lot of attention to detail.
That’s not to say that using a separate page can’t be a better experience, I just think this is contextual.
I see where the author is coming from; it can be very discouraging to see people flock to shitty, broken, privacy-destroying “products” over “projects” whose main sin is being chronically under-funded. It’s easy to say, “well, people just don’t care about their privacy,” or whatever the VC violation du jour happens to be.
But many people - millions of people - are willing to put up with a little bit of technical difficulty to avoid being spied on, being cheated, being lied to in the way that so many SaaS products cheat us, lie to us, and spy on us. It’s a tradeoff, and there is a correct decision.
There are basically two options; I don’t know which one is true. Either:
a. with sufficient funding, regulation, and luck, we can build “products” to connect people and provide services that aren’t beholden to VCs or posturing oligarchs, quickly and effectively enough that the tradeoff becomes easier to make, or
b. we cannot, and capital remains utterly and unshakably in control of the Internet, and the rest of our daily lives, until civilization undergoes some kind of fundamental catastrophe.
From the author’s bio:
Twi works tirelessly to make sure that technology doesn’t enslave you.
I appreciate that work. It’s vital. Unfortunately, I’m pretty sure we’re going to lose this war unless we can implement drastic, radical regulation against VC-backed tech companies on, frankly, completely unrealistic timelines. We’re already living the cyberpunk dystopia; get ready for the nightmare.
I think the funding discussion needs to start with what libre projects are. For all his faults, I think this was best summarized by Drew DeVault: Open source means surrendering your monopoly over commercial exploitation. Or perhaps to rephrase, libre software is a communal anti-trust mechanism that functions by stripping the devs of all coercive power.
This is a useful lens to view libre software through, because plenty of projects are so large that they have a functional monopoly on that particular software stack, which provides some power despite the GPL. Also, “products” (which are UX-scoped) are usually of larger scope than “projects” (which are mechanism-scoped), almost by definition. A lot of libre projects are controversial almost exclusively because they’re Too Big To Fork.
So I think there are two sides to the problem here: decreasing the scope (for the reasons above), and increasing the funding.
Funding-wise, the problem is that 1) running these systems requires money, 2) whoever is providing the money has the power, and 3) users don’t seem to be providing the money.
So, there are three common solutions to this: get the funding from developers (i.e. volunteer-run projects), get the funding from corporate sugar-daddies (either in the form of money or corporate developer contributions), or get the funding from average consumers.
Volunteer-run projects are basically guaranteed to lose to corporations - most devs need a day job, so corporations will largely get their pick of the best devs (the best devs are essentially randomly distributed among the population, so recruiting from the much-smaller pool of only self-funded devs means statistically missing out on the best devs), and typically results in the “free as in free labor” meme.
Corporate funding will, even with the best of intentions by the corporations in question, tend to result in software that’s more suited to the corporate use-cases than to average users - for instance, a home server might primarily need to be simple to set up and maintain by barely-trained users, whereas Google’s servers might primarily need to scale well across three continents. This has two effects: first, it increases the scope (which is bad per the above paragraphs) and saps the priorities of the project if there ever need to be hard decisions. Second, it gives coercive power to the money-holder, obviously.
So the last option is getting the funding from the average consumer. Honestly, I think this is the only long-term viable solution, but it basically involves rebuilding the culture around voluntarism. As in, if everyone in the FOSS community provides e.g. a consistent $20/month and divvies it up between the projects they either use or plan to use, then that could provide the millions/billions in revenue to actually compete with proprietary ecosystems.
…or it would provide that revenue, if everyone actually paid up. But right now something like 99% of Linuxers et al don’t donate to the libre projects they use. Why is that?
Well for starters, businesses pour huge amounts of effort into turning interested parties into paying customers, whereas plenty of open-source projects literally don’t even have a “donate!” page (and even if they do have such a page, plenty of those are hard to find even when actively looking for them), let alone focusing on the payment UX.
IMO, there needs to be a coordinated make-it-easy-to-pay project (or should I say “product”?), where e.g. every distro has an application preinstalled in the DE that makes it easy to 1) find what projects you most often use (and also what you want to use in future), 2) set up payment, and 3) divvy it up in accordance to what you want to support.
BTW, I hate the term “donate” (and you’ll notice I don’t use it) because it displays and reinforces a mindset that it’s a generous and optional act, as opposed to a necessary part of “bringing about the year of the linux desktop” or “avoiding a cyberpunk dystopia” or such.
if Mastodon ever hopes to grow into the millions of MAU or have any celebrities join
Mastodon is already at millions of MAU, not to mention all the other ActivityPub servers out there. Most people who use ActivityPub don’t really want celebrities to join.
I currently have 5 accounts on ActivityPub servers of various kinds, and with the exception of the overlap between my “professional” and “main” accounts, each one sees a completely different slice of the social archipelago. I have tech and meta discussion on my main accounts a lot, and some news; mostly visual art, spirituality, mental health discussion, and music on my spiritual/mental health advocacy account; personal posts from friends and friends of friends on my private account; and pretty much only microfiction and visual art on my microfiction account.
I bring this up because, while I don’t think that metrics like this are particularly helpful in measuring the success of the network (for example, these metrics don’t include posts from instances that don’t provide this API, and don’t include posts from instances - even moderately-sized ones - that don’t federate widely), I do think people who aren’t embedded in ActivityPub/social archipelago cultures don’t understand the diversity of content that was available even before the Twitter Exodus. There are a lot of people making a lot of art, discussion, commentary, and even reporting via ActivityPub, and most people can find a niche they enjoy.
Aside: I wouldn’t call it a fansite exactly — that site was (and AFAIK still is) run by the creator of Ferris, originally to showcase their proposal of a mascot for Rust. Evidently it succeeded. :)
Edit: Oh, I guess you might have meant that it’s a fansite with respect to Rust, rather than to Ferris.
I’m already happy with Emacs on my VT420, because of the heavily optimized redisplay for lower baud rates and the mitigations for XON/XOFF flow control. But I’m still excited for a new terminal editor that doesn’t require a GPU accelerated terminal (they have played us for utter fools!).
This is a fascinating piece of writing which I agree with in general direction but abhor in specific. In particular, the author takes an entirely regressive tack when discussing the trajectory of the development of computing.
no one was lifting the most centrally important functional objects in our lives into the domain of beauty. The practice of these long-gone artisans had disappeared.
Has it, though? Certainly there are techniques for building beautiful objects that are lost, but modern-day artisans still make beautiful weapons, from swords and armor to bows and firearms. The practice of creating beautiful things is not gone; rather, the Met’s Arms and Armor gallery discussed by the author is the collected, filtered product of nearly every armorer in the world from as early as 1300 BCE to today. Of course the things we see in our everyday lives do not meet that standard.
The author parlays this mistaken idea into a generalized sentiment that there is no such thing as a beautiful computer today, but of course, there absolutely is. Even in the author’s own conception of a beautiful computer as a heavily restricted one, there is a thriving community of artisans building some constrained computers of various aesthetics. There’s even prior art for beautifully woodworked machines. And that’s just portables - if we get into desktop machines, there’s a whole world of form-over-function (or form-beside-function) designs out there, from the 1970s to the vast diversity of PC case mods or scratch fabs, including many wooden examples.
The other problem I have is the essentialization of Western culture and Japanese culture. I think it does a disservice to the essential criticism here: we build too much cheap shit that doesn’t look nice or work well, and we should build more things that do what people want and need, look good, and are easy to maintain. The solution is not to eschew networking, software other than a text editor, and so forth. The solution is to build a society that allows people to spend time producing art without fearing they’ll be out on the street because the thing they’re building isn’t profitable.
The woodworking is indeed beautiful, but as I’ve written before, I prefer the vision of another Japan-opinion-haver (William Gibson) in the direction of coral and turquoise.
Spengler is regarded as a nationalist and an anti-democrat, and he was a prominent member of the Weimar-era Conservative Revolution. Although he had voted for Hitler over Hindenburg in the 1932 German presidential election and the Nazis had viewed him as an ally to provide a “respectable pedigree” to their ideology,[4] he later criticized Nazism due to its excessive racialist elements, which led to him and his work being sidelined in his final years. He saw Benito Mussolini, and entrepreneurial types, like the mining magnate Cecil Rhodes,[5] as examples of the impending Caesars of Western culture…
Thanks a lot for the reference. Now, I don’t see how I should interpret this in regard to the text. Is it a joke? A simple cultural reference? Or something else more cringe?
I see this kind of rhetoric a lot around both LLMs and diffusion models, and it worries me.
To be honest, I’m not convinced I’m not also only doing that. I mean, how do you know for sure your consciousness isn’t basically that? Am I not just an autocomplete system trained on my own stories?
We’re engineers, right? We’re supposed to be metacognitive to at least some degree, to understand the reasons we come to the conclusions we come to, to discuss those reasons and document the assumptions we’ve made. Maybe what this conversation needs is some points of comparison.
LLMs just predict the next word by reading their input and previous output, as opposed to creating rich, cyclical, internal models of a problem and generating words based on (and in concert with) that model, as I (and I believe most humans) do.
LLMs and diffusion models are just trained on an input data set, as opposed to having a lifetime of internal experience based on an incredibly diverse set of stimuli. An artist is not just a collection of exposures to existing art, they are a person with emotions and life experiences that they can express through their art.
Are you a P-zombie in your own mind? Are you devoid of internal experience at all? Have you never taken in any stimuli other than text? Of course not. So, you are not just an autocomplete system based on your own stories.
Note that I am not saying you could not build a perceptron-derived model that has all of these things, but current LLMs and diffusion models eschew that in their fundamental architecture.
First off, I want to say I agree with you, and I hope it was clear in the post that I don’t actually believe LLMs are currently at par with humans, and that human brains probably are doing more than LLMs right now. Also, of course it’s obvious that an LLM has not been trained on the same kind of stimulus and input data.
But the point of my post, said a different way, is that your comment here is making a category error. You’re comparing the internal experience of consciousness with the external description of an LLM. You can’t compare these two domains. What an algorithm feels like on the inside and how it is described externally are two very different things. You can’t say that humans are creating “rich, cyclical, internal models of a problem” and then only point at matrices and training data for the LLM. The right category comparison is to only point at neurons and matrices and training data.
The difference in training data is a good point, but LLMs are already beginning to be trained on multimodal sensory inputs, so it’s not a huge leap to imagine an LLM trained with something roughly equivalent to a human lifetime of diverse stimuli.
I agree with you that my metacognitive experience is a rich, cyclical, internal model and that I’m generating and thinking through visualizations and arguments based on that model. I certainly don’t think I’m a P-zombie. But you are not substantiating which neuron substructures produce that experience that the machines are lacking. Do you know for sure the LLMs don’t also feel like they have rich, cyclical, internal models? How can you say? What is the thing human brains have, using external descriptions only (not internal, subjective, what-it-feels-like descriptions), that the machines don’t have?
It’s not central to your point, and you can probably tell from the “this seems obviously wrong, so I must not be missing any important details” part, but what you said about philosophy of mind in the first three paragraphs is inaccurate. Cartesian dualism was an innovation in C17 and the prior prevailing theories were somewhere between monism and substance dualism. Hylomorphic dualism is probably closer to modern functionalism about mind.
Ironically, this reply:
You’re comparing the internal experience of consciousness with the external description of an LLM. You can’t compare these two domains.
is untenable on a materialist view of consciousness, and a distinction between “internal” descriptions of experience and “external” descriptions of the experiencing agent is one of the major reasons offered in favor of non-physical views of consciousness. The eliminative materialist would simply deny that there are such things as rich internal models and suchlike.
I’m clearly out of my depth with respect to philosophy or I would have attempted to be more accurate, heh, even considering my flippant attitude towards Descartes.
So, forgive the junior question, but are you saying that having a different experience of something based on your perspective or relationship to it is incompatible with a materialist view? What would materialism say about the parable of the blind men and the elephant?
are you saying that having a different experience of something based on your perspective or relationship to it is incompatible with a materialist view?
No, the critical distinction you made above is between the internal and the external. When a human experiences red, there is a flurry of neuronal activity (the objective component) and the human has the experience of redness (the subjective component). The hard problem of consciousness is explaining why there is a subjective component at all in addition to the objective component. People who think that LLMs aren’t “conscious” are denying that LLMs have that subjectivity. The idea of a P-zombie is someone who has all the neuronal activity, but no internal experience of redness, i.e. someone who has all the objective correlates of consciousness, but there’s “nobody home”, so to speak.
For the blind men and the elephant, the question isn’t the difference between the men having different experiences, it’s between the different components of the experience each is having.
I don’t think it is a category error. We can observe at least part of the mechanism by which our brains form the cognitive models we use to produce speech. When I say “rich, cyclical, internal model”, I’m being specific; our brains think about things in a way that is
rich, in that it unifies more than one pathway for processing information; we draw from an incredibly large array of stimuli, our endocrine wash, our memories, and our brain’s innate structures, among other things
cyclical, in that we do not merely feed-forward but continuously update the status of all neurons and connections
internal, in that we don’t have to convert the outputs of these processes into a non-native representation to feed them back into the inputs
LLMs do not do this; as far as I know, no artificial neural network does, and no existing ANN architecture can. We can observe, externally, that the processes our brains use differ dramatically, in these ways, from those within an LLM or any other ANN.
Okay, I didn’t understand the specificity you meant. However:
rich, in that it unifies more than one pathway for processing information; we draw from an incredibly large array of stimuli, our endocrine wash, our memories, and our brain’s innate structures, among other things
isn’t this similar to what GPT4’s multimodal training claims?
cyclical, in that we do not merely feed-forward but continuously update the status of all neurons and connections
isn’t this like backpropagation?
internal, in that we don’t have to convert the outputs of these processes into a non-native representation to feed them back into the inputs
isn’t there a whole field dedicated to trying to get ANN explainability, precisely because there are internal representations of learned concepts?
I’m not familiar with the details of multimodal training, thank you for bringing that up.
As to the other two, I specifically don’t mean backpropagation in the sense that it’s typically used in ML models, because it’s not continuous and not cyclic. In a human brain, cycles of neurons are very common and very important (see some discussion here), and weight updating happens during thought rather than as a separate phase.
Similarly, while it’s true that ANNs have internal representations of concepts, they specifically cannot feed back those internal representations; even in use cases like chat interfaces where large amounts of output text are part of the inbound context for the next word, those concepts have to be flattened to language (one word at a time!) and then reinflated into the latent space before they can be re-used.
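To make the “flattened to language (one word at a time)” point concrete, here is a minimal greedy-decoding sketch using the Hugging Face transformers and torch libraries (gpt2 is just a stand-in checkpoint); in this loop, the only thing carried from one step to the next is the growing list of token ids.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("An artist is not just a collection of", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits                          # full forward pass over the token sequence
        next_id = logits[0, -1].argmax()                    # internal activations collapse to one token id
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)   # only that id is appended and carried forward
print(tok.decode(ids[0]))
```

Production inference caches per-step activations within a single generation, but across chat turns only the emitted text survives and has to be re-encoded, which is the flattening and reinflation described above.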
LLMs just predict the next word by reading their input and previous output, as opposed to creating rich, cyclical, internal models of a problem
Generating rich models of a problem can be phrased as a text completion task.
they are a person with emotions and life experiences that they can express through their art.
To the degree that they are expressed through their art, LLMs can learn them. They won’t be “authentic”, but that doesn’t mean they can’t be plausibly reproduced.
I don’t disagree. Text models don’t do these things by default, on their own. But that’s less of a skill hindrance than it sounds, as it seems to turn out lots of human abilities don’t need these capabilities. Second, approaches like chain of thought indicate that LLMs can pick up substitutes for these abilities. Third, I suspect that internal experience (consciousness) in humans is mostly useful as an aid to learning; since LLMs already have reinforcement learning, they may simply not need it.
I’m on call this week, and I’m working on finishing some things up before I take some PTO next week to help my partner recover from surgery. I’ve got a ton of miscellaneous open source nonsense to catch up on, and I’m also working on the longest solo writing project I’ve ever done. There’s definitely some adjusting to do!
For the first time in a pretty long time, I have both the time and the motivation to pursue some “extracurricular” (that is, non-work-related and non-family-related) projects, and I’m trying to pick a good mix between learning new technical skills, language skills, and hobbies including TTRPGs and music.
If anyone has recommendations for content for intermediate level Vim users, I’d love to hear them!
Good luck. I doubt it will become a 100 year language without people being able to hack on code on their Macs or Windows machines. But I could be wrong of course :-)
Presumably, if it becomes popular, someone other than ddv will port it to Windows, Mac OS, and so forth. This decision is certainly going to increase friction for adoption, though.
There are thankless jobs, and then there’s being the maintainer of a hostile fork of a ddv project…
Now I’m thinking about creating a fork of SourceHut that soft-wraps text in mailing lists.
Presumably, because it’s a ddv project, it won’t become popular. All I ever read about the guy involves friction over his opinions.
Drew has many other projects which have achieved significant traction, most notably Sourcehut and Sway/wlroots.
Except they say: we don’t want those changes. We will never upstream them.
Nobody will burn their fingers on this. What would be the point of maintaining a port that the original project doesn’t want?
Well, Darwin is open source :-)
Will it work on ReactOS? And then just happen to work on Windows?
Yeah, as much as I like Drew’s work and some aspects of Hare’s design, this is probably going to be the reason it doesn’t gain traction. That being said, with WSL, Windows support is probably less important than ever, but the lack of MacOS support will hurt it drastically.
I think the hope is that people stop using MacOS and Windows within 100 years.
In a previous lobste.rs post, I asserted that you could build a system for doing end-to-end encryption over an existing messaging service in about a hundred lines of code. It turns out that (with clang-format wrapping lines - libsodium) it was a bit closer to 200. This includes storing public keys for other users in SQLite, deriving your public/private key pair from a key phrase via secure password hashing, and transforming the messages into a format that can be pasted into arbitrary messaging things (which may mangle binaries or long hex strings).
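For a rough idea of the moving parts, here is a short Python sketch using PyNaCl (libsodium bindings) rather than the author’s C code; the key phrases, the fixed salt, and the use of sealed boxes are illustrative assumptions, not necessarily what the original ~200 lines do.

```python
import nacl.encoding
import nacl.public
import nacl.pwhash

def key_from_phrase(phrase: bytes) -> nacl.public.PrivateKey:
    # Argon2id stretches a key phrase into a 32-byte seed; the fixed salt is a placeholder only.
    salt = b"\x00" * nacl.pwhash.argon2id.SALTBYTES
    seed = nacl.pwhash.argon2id.kdf(32, phrase, salt,
                                    opslimit=nacl.pwhash.argon2id.OPSLIMIT_MODERATE,
                                    memlimit=nacl.pwhash.argon2id.MEMLIMIT_MODERATE)
    return nacl.public.PrivateKey(seed)

alice = key_from_phrase(b"correct horse battery staple")
bob = key_from_phrase(b"a completely different key phrase")

# Encrypt to Bob's public key (which a real tool would look up in its SQLite table of known keys)
# and armor the result as base64 so it survives being pasted into an arbitrary messaging service.
wire = nacl.public.SealedBox(bob.public_key).encrypt(b"hello", encoder=nacl.encoding.Base64Encoder)
plain = nacl.public.SealedBox(bob).decrypt(wire, encoder=nacl.encoding.Base64Encoder)
print(wire.decode(), plain)
```

A real exchange would presumably use an authenticated box between the two parties’ keys rather than an anonymous sealed box, but the shape is the same: key phrase in, armored paste-friendly text out.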
Please share this with any lawmakers who think that making Signal, WhatsApp, and so on install backdoors will make anything safer.
“Lawmakers” don’t think that banning E2EE will actually prevent the use of E2EE, and engaging with them as if they do is a big part of why we lose these fights.
Relevant XKCD: https://xkcd.com/651/
Interesting. What exactly motivates them then?
Being perceived as doing something about a thing that voters perceive as a problem.
Also, the point is rarely to actually make the thing impossible to do, the point is to make it impossible to do the thing while also staying on the right side of the law/retaining access to the normal banking and financial system/etc.
This, for example, was the outcome of SESTA/FOSTA – sex trafficking didn’t stop, and sex work didn’t stop, but legitimate sex workers found it much harder to stay legit.
I wish I could pin this reply to the top of the thread.
Our lawmakers are, by and large, quite competent. They are competent at politics, because our system selects people who are good at politics for political positions.
Part of that is, as you say, being seen to do something. Another part is reinforcing the systems of power they used to rise to the top; any law that makes using encryption criminal criminalizes the kind of person who already uses encryption. It’s worth thinking about why that’s valuable to politicians and their most monied constituents.
Don’t forget the key word “perceived” and the two places it appeared in that sentence.
There’s often a lot of separation between “the problems that actually exist in the world” and “the problems voters believe exist in the world”, and between “the policies that would solve a problem” and “the policies voters believe would solve that problem”. It is not unusual or even particularly remarkable for voters to believe a completely nonexistent problem is of great concern, or that a politician is pursuing policies the exact opposite of what the politician is actually doing, or that a thing that is the exact opposite of what would solve the problem (if it existed) is the correct solution.
(I emphasize “voters” here because, in democratic states, non-voters wield far less influence)
So in order to usefully engage with a politician on a policy, you need to first understand which problem-voters-believe-in the politician is attempting to address, and which perceived-solution to that problem the politician believes they will be perceived-by-voters to be enacting.
Which is why cheeky tech demos don’t really meaningfully do anything to a politician who is proposing backdooring or banning encryption, because the cheeky tech demo fundamentally does not understand what the politician is trying to accomplish or why.
Assuming politicians are competent, what they think people believe should be pretty accurate, and talking to them directly will not change that. What will is changing what voters actually believe, and the cheeky tech demo could help there.
We do need a platform to talk to voters though. So what we need is to reach out to the biggest platforms we can (this may include politicians who already agree) and show them the cheeky tech demo.
Start with Oregon senator Ron Wyden.
Power.
thank you so much for this!
I really dislike the position that advertisers put users in. I would probably tolerate preroll ads on YouTube, since, as I understand it, real ad views contribute to payouts for video creators.
Unfortunately, many YouTube ads are loud, violent, gross, and/or contain jump-scares, and there’s no real way to avoid seeing these ads without blocking all ads or subscribing to YouTube Premium. I would much rather not watch YouTube than be subjected to this kind of ad, and holding art hostage behind this kind of crap does not incline me towards giving the hostage-taker money. (I’m not opposed to paying for art in general; I am a lifetime member of Nebula for this reason.)
This is a similar position to that in which advertisers put users on other sites; they say “let us show you ads so we can fund our site”, which seems like a fair trade, but it’s omitting the fact that many ads are themselves absolutely vile, and some contain malware.
I can tolerate one. Maybe two. But every time I view YouTube without an adblocker on a machine that isn’t my own, I can be assaulted by three or even FOUR preroll ads back to back. I don’t care if it’s three 30-second ads; it is still extremely infuriating.
90% of the time, if a link takes me to YouTube I hit the back button immediately. A couple of days ago, for the first time in a while, I didn’t. This was my experience:
Yeah, that sounds awful; I don’t think I would tolerate that.
Few things are more emblematic of my Nix experience than the Lobsters front page right now. Side-by-side we have two stories: “Nix Flakes is an experiment that did too much at once…” and “Experimental does not mean unstable” – both referring to Nix flakes, and both presenting completely conflicting ideas about the state of the ecosystem.
I’ve loved making the switch to NixOS. My love for the expression language is not growing, but what it produces is very good. And the difficulty in finding any sort of unified messaging about any topic is absolutely frustrating.
You might be surprised to hear that they are not conflicting ideas, mostly.
We both agree they need to be made stable.
We disagree with the methodology, and how it should happen.
My understanding of the DetSys post is that they want to pick a date, and remove the flag, whether or not it’s ready.
My opinion is that we should pick the underlying features and get them ready for use ASAP, until Flakes materializes out of all of its features being correct and finished.
We both agree Flakes are in use.
(Though I haven’t said it)
Flakes are popular. A lot of publicity has been made around starting with this experimental feature, and it caught on. Whether or not that’s a good thing is another topic, and not part of my article. And truthfully, I don’t know.
We both agree Flakes as it is has to be supported.
You have to read between the lines, but I believe we should give ourselves the maneuvering space to allow deprecating the erroneous bits.
A stable feature in Nix will have to live on for years, even decades, according to current history. Even early Nix expressions are still intended to be fully supported in the foreseeable future.
If it’s merged in quickly with warts, those warts need to be supported as a feature. While it stays an experimental feature, we can soft-deprecate (warn) whenever, and then drop the feature in a more measured fashion (on a longer timeline).
If I really squint, I can see the common ground between your two positions. I imagine if you two had a discussion face to face, you would find even more common ground and maybe even agree on a strategy for dropping the experimental flag. Although as standalone pieces, without your commentary here, I was left with the impression that you mostly disagree.
For what it’s worth, I think the strategy that you advocate for is the better one.
Yeah, the main thing is this was not authored as a response to the other post. The only thing that was a response was rushing the publishing to the exact moment. I was about to publish “this week” otherwise.
Picking a date, arbitrary or not, could prevent further bikeshedding: with a target goal in place, folks may cooperate toward something that could otherwise be filibustered indefinitely.
This is my exact feeling. The Nix language is the barrier that discouraged me from accessing NixOS for years after seeing an amazing demo at SCaLE 13x and being totally dazzled. Nixpkgs is awesome, NixOS is awesome, but Nix is full of difficulty and contradictions.
It’s always sad to hear of someone’s death, and my heart goes out to those who knew Kris.
How is this on topic though?
Kris was a prolific hacker, streamed programming and ops work on Twitch, and ran a Mastodon instance which many Lobsters users are a part of.
I’m not familiar with the departed, but obituaries of persons who did on-topic things seem to be a common use of the person tag, and my understanding is that “a tag applies” is the definition of “on-topic” here.
How do we change it?
Make it a law that paid parking lots have to accept payment by cash?
“To pay with cash please buy a single-use code in one of the authorized points” (nearest one 2 districts away, opening tomorrow morning).
I agree with the spirit of what you said though.
You are experienced with the dark patterns, sir
Or make it a law that how you pay must be absolutely evident and understandable at a glance to 9 out of 10 randomly selected people. Then, if you find yourself in a situation where it’s not evident how to pay, you just turn on your phone’s camera, record a 360 video, and go about your business knowing that you can easily dispute whatever fee they throw at you.
This is probably the best answer. No cost to “plot of land for parking” operators, no cost to people. Just record that you couldn’t clearly tell what’s going on and move on with your day.
Ah yes, big cash boxes under unmotivated observation, sitting out in public. That won’t raise the cost of parking.
Has parking become cheaper when those boxes were replaced with apps?
Maybe? This entire discussion is severely lacking in generality. People are extrapolating wildly from one ranty post in one US city. I could fake another rant saying that parking is free as long as you scan your eyeballs with Worldcoin and it would add as much data…
Plant asphalt-breaking flora at the edges of the lots. Bermudagrass is a good choice if you can obtain it, but standard mint and thyme will do fine for starters. In some jurisdictions, there may be plants which are legal to possess and propagate, but illegal to remove; these are good choices as well.
We can start by not forcing people to use an app to begin with.
In Chicago, they have a kiosk next to a row of on-street parking. You just put in your license plate number, and pay with a credit card. No app needed. At the O’Hare airport, short term parking gives you a receipt when you enter the lot. Then you use it to pay when you exit. No app needed.
Right. The way it used to be everywhere, until relatively recently.
A root problem is that, for a lot of systems like this, a 95% solution is far more profitable than a 99% solution. So companies will happily choose the former. Mildly annoying when the product is a luxury, but for many people access to parking is very much a necessity.
So there’s one way to change this: companies providing necessities have to be held to stronger standards. (Unfortunately in the current US political climate that kind of thing seems very hard.)
You’re talking about public (on-street) parking. This post is talking about private parking lots, which exist for the sole purpose of profit maximization.
The cities could pass laws to regulate the payment methods. Parking lots that don’t conform can be shut down.
Depending on the city, getting such regulations passed may be difficult though.
The way I see it, the issue is that every random company has to do a positively amazing job of handling edge cases, or else people’s lives get disrupted. This is because every interaction we have with the world is, increasingly, monetized, tracked, and exploited. Most of these companies provide little or no value over just letting local or state governments handle things and relying primarily on cash with an asynchronous backup option. Especially when it comes to cars, this option is well-tested in the arena of highway tolls.
To put it succinctly: stop letting capital insert itself everywhere in our society, and roll back what has already happened.
First do no harm. Don’t build stuff like this.
Learn and follow best practices for device independence and accessibility. Contrast. Alt text. No “here” links. No text rendered with images.
Those are things we can and should do.
But likely things like this won’t change until there are lawsuits and such. Sigh.
This seems like it’s just some random for-profit Seattle parking lot (cheap way to go long on a patch of downtown real estate while paying your taxes) that, consistent with the minimal effort the owner is putting in generally, has let whatever back-alley knife fight parking payments startup set up shop as long as they can fork over the dough. It is essentially a non-problem. Even odds the lot won’t exist in two years. There are many more worthwhile things to care about instead.
I disagree. This is going on outside Tier-1 and Tier-2 cities with high population density. Small cities and large towns are finally coming to terms with (using Shoup’s title) the high cost of free parking and replacing meters with kiosks (usually good but not necessarily near where you need to park) or apps (my experience is they’re uniformly bad for all the reasons in the link) to put a price on public parking.
One nearby municipality has all of:
Even if you’re local and know the quirks you’ll have to deal with it.
It’s not just “some random for-profit Seattle parking lot”. I’ve run into frustrating and near-impossible experiences trying to pay for parking in plenty of places. Often compounded by the fact that I refuse to install an app to pay with.
The other day I was so happy when I had to go to the downtown of (city I live in) and park for a few minutes and I found a spot with an old-fashioned meter that accepted coins.
History does not bear you out.
What?
Establish a simple interoperable protocol standard that every parking lot must support by law. Then everyone can use a single app everywhere which fits their needs. I mean, this is about paying for parking, how hard can it be?
I think that’s the thing, though. A company comes in to a municipality and says “this is about paying for parking, we make it easy and you no longer have to have 1) A physical presence, 2) Employees on site, or (possibly) 3) Any way to check if people have paid.” They set you up with a few billboards that have the app listed on them, hire some local outfit to drive through parking lots with license plate readers once or twice a day, and you just “keep the profit.” No need to keep cash on hand, make sure large bills get changed into small bills, deal with pounds of change, give A/C to the poor guy sitting in a hut at the entrance, etc.
I write this having recently taken a vacation and run into this exact issue. It appeared the larger community had outsourced all parking to a particular company with a somewhat mainline app on the Android and Apple stores, and hence was able to get rid of the city workers who had been sitting around doing almost nothing all day as the beach parking lots filled up early and stayed full. I am very particular about what I run on my phone, but my options were to leave the parking lot and drive another 30 minutes, kids upset, in hopes that the next beach had a real attendant, or to suck it up. I sucked it up and installed the app just long enough to pay, and enough other people were doing the same that I don’t see the company caring if a few people leave on the principle of paying by cash; either way the lot was full.
I say all this to point out that some companies are well on their way to having “the” way to pay for parking already and we might not like the outcome.
I get that digital payment for parking space is less labor intensive (the town could also do that themselves, btw), but we can by law force these companies to provide standardized open APIs over which car drivers can pay for their parking spot. Why don’t we do that?
I’m always in favor of citizens promoting laws they feel will improve society, so if you feel that way I’d say go for it! I don’t, personally, think that solves the issue of standardizing on someone needing a smart phone (or other electronic device) with them to pay for parking. That to me is the bigger issue than whose app is required (even if I can write my own, until roughly a year ago I was happily on a flip phone with no data plan). So if this law passes, the company adds the API gateway onto their website and… we’re still headed toward required smart-device use.
But, again, I strongly support engaging with your local lawmakers and am plenty happy to have such subjects debated publicly to determine if my view is in the minority and am plenty happy to be outvoted if that is the direction it goes.
Shutting off traffic to the small number of remaining HTTP sites is part of the steady and relentless enshitification of the internet. It’s not by coincidence that Google is leading the effort. They are doing a lot of other work towards this end as well, with WEI being only the latest example.
I know many of you believe in the security justifications for this move. I just think that this analysis doesn’t take the big picture into account. Google’s justification looks at only one factor (the risk of an adversary spying on your web traffic or injecting malware). The bigger question is, how big a problem is that TODAY in the real world, and what are we losing by shutting off access to those indy HTTP web sites? A more balanced analysis will look at all of the risks and benefits, and will weigh the overall costs of the change.
Let’s consider the web sites that Google wants to shut off access to. These are indy web sites, typically run by hobbyists and enthusiasts. These sites aren’t your bank or your hospital or e-commerce sites. They are often run by people who prefer to run simple software that they fully understand and are in control of, e.g. cat-v.org. Or they are old servers that haven’t been updated in years, serving somebody’s hobby web site. I have all sorts of niche interests, and a strong interest in history, so I visit these kinds of sites all the time. I wish to continue visiting them in the future.
What’s the benefit of removing access to these web sites? There’s a risk of an adversary spying on your web traffic or injecting malware. Well, that used to be an important issue, but all the web sites for which this kind of attack matters went to HTTPS many years ago. Today it’s different.
In short, I think that the alleged risk of using HTTP on today’s internet is grossly exaggerated.
On the flip side, what are we losing when Google locks down Chrome in this way, and limits access to the long tail of indy HTTP web sites? This is where I think the bigger problem is.
The irony is that there are players spying on everyone using the Web; it’s just that (a) it’s Google and Cloudflare, and (b) “HTTPS everywhere” is irrelevant to that problem.
I’m at the point with Google now where even though I love the idea of HTTPS everywhere, and I can’t see any problem with it, I’m just assuming there’s malice afoot that I am not smart enough to figure out.
Just because someone is motivated by greed doesn’t automatically mean that their interests don’t align with yours (capitalism would not work at all otherwise!). Google’s primary concern with HTTP was driven by companies like Comcast hijacking HTTP requests to inject their own adverts into customers’ connections. This threatened to undermine Google’s revenue model and, if it became normalised, was an existential threat (imagine if every ISP replaced Google Ads with their own, transparently, at the proxy). Pushing everything to HTTPS makes this impossible. On the plus side for everyone that isn’t Google, ISPs rewriting legitimate traffic to inject ads is probably more evil than anything that Google has done and moving to a world where it’s impossible is a net win.
Let’s not get carried away…
https://theintercept.com/2018/03/06/google-is-quietly-providing-ai-technology-for-drone-strike-targeting-project/
Preaching to the choir, man :) I’m way down the economic (and social!) liberal scale, and have Rand and Mises on my literal bookshelf a few metres away. (Also Russell, Lowenstein, and Cipolla - because I’m not a fanatic ;) ).
My perspective on Google isn’t that it’s bad that they’re motivated by greed, or that advertising is intrinsically bad.
Rather: many things that I personally value are problematic for a company that makes its money from Web advertising. So Google tends to act quite directly against my values and interests. That doesn’t make them evil per se, but maybe evil in the original hacker sense.
That’s what I mean by “malice afoot”. I see something seemingly very valuable - omnipresent encryption - and get to worrying “okay but what’s their real goal here”. Maybe it really is just that we’re aligned for once and they’re taking out a competitive threat that I also want rid of :)
No one is doing that.
Not literally, no, but that’s not the point and you know it.
Uh, it kind of feels like it’s the point when the post repeatedly says that they’re “breaking” the HTTP web or “removing access” or “shutting off”, which they’re not. It really destroys the entire post’s credibility because it is talking about something that is not happening at all.
The vast majority of Web users are utterly unsophisticated, and if the Web browser with the lion’s share of the market tells them a site is unsafe, they’ll avoid it.
That is essentially breaking the HTTP Web. It’s being done by social engineering, sure, but it’s still breakage.
Why do you continue splitting this hair? It doesn’t dent the credibility of the original post at all; it’s only a useful talking point if you’re trying to deflect criticism from Google.
OK, and that’s because the sites are unsafe.
I think breaking is much too strong of a word. Users can still visit the sites. Being informed about the risks is not “breaking” anything.
I don’t consider it splitting hairs, I see this as a massively exaggerated and incorrect view.
I disagree completely. The entire premise of the post is based on this idea that access is being “shut off”. It isn’t.
Like, consider this:
“I wish to continue visiting them in the future.”
The poster literally can continue to do so, indefinitely.
“What’s the benefit of removing access to these web sites? “
They aren’t!
Over and over and over again they say that access is being revoked and it isn’t. Over and over again they talk about how these sites won’t be accessible and that is incorrect.
Really? http://cat-v.org/ is unsafe because it’s not served over HTTPS?
Technically, sure, but it doesn’t matter.
For the vast majority of Web users, who are insufficiently clueful (or even motivated!) to wade through the browser warnings and settings, such sites are inaccessible. A non-clueful user shouldn’t have to work out whether a site is being flagged as unsafe simply because it’s served over HTTP, or because it’s loaded with malware or something.
If you don’t know what HTTPS is, why you might use it, or why you might choose not to - an HTTP site is simply broken for you with these changes.
One thought does occur to me, though. Perhaps a better approach would be to straight up disable HTTP POSTs or XHR to non-loopback addresses. Then people could read over HTTP all they wanted, but to actually interact with a site, they’d need to use HTTPS.
Not displaying the website without a full window overlay that you must click through (like a TLS misconfiguration warning) is essentially the same thing, and you’re aware of that.
Just because nobody pulled the plug doesn’t mean it isn’t insidious.
I am reacting harshly because the tone of that comment is easily read as smug and doesn’t attack the actual points GP brings up - even if that’s not how you intended it.
I’m definitely not aware of that because I think that’s a wildly false equivalence.
Literally every point is predicated on the false equivalence of “warning a user they are visiting a site that is transmitted unsafely” with “blocking HTTP sites”.
The last time the computer system at my library went down, I gave the librarian a card with a URL for the PDF I wanted to print (an HTTP link to my personal server). They had previously told me they couldn’t print something from email, and were pleasantly surprised by how easy it was to access and print. This use case could be destroyed by the coming changes, if the librarian is not comfortable overriding a browser security warning.
Or you could just enable HTTPS, and then there’s no way that someone between your web server and the library can inject malware into the PDF and infect the library computers.
Turning low-cost attacks into medium-cost attacks without actually fixing the vulnerability just means the attackers willing to pay the medium cost will be that much more capable and motivated.
The difference between an attack that can be mounted at scale with a cheap off-the-shelf middle box and one that requires a targeted attack to compromise one of the endpoints is not the difference between low and medium cost.
If those aren’t low and medium cost attacks, I don’t know what are.
That’s very clear.
It’s not a medium-cost attack. It’s an extremely high-cost and low-payoff attack with HTTPS. Otherwise banks would be using something else.
The medium cost attack is having someone walk into the library and hand the librarian a URL.
Totally agree. I want to add two things:
1.) A browser should be safe by default against any kind of “malware” coming from a website, no matter how it got there.
2.) We will come to a point (and are already close) where it becomes easy to stop people from creating a website. We already have the US stopping (usually financial) companies from doing business with people who are on certain lists. This could be extended to the ability to effectively publish a website as well. Not great for freedom of information.
This is very concerning.
While this is, indeed, a very scary story, I don’t really see how this particular change by Google brings us any closer to it, especially in a world where LetsEncrypt (i.e. pure domain validation) is the biggest CA around.
Simply put, it conditions users to disregard non-encrypted websites. Of course we can now blame users, but in the end it doesn’t matter: people will click on a link to read about topic X and then their browser gives a warning and they will leave the page immediately, thinking it’s bad/malware or whatever.
And sure, everyone can use LetsEncrypt - except that now there are 2 attack vectors to prevent someone from publishing information on their website: 1.) by taking the server down and 2.) by revoking the certificate.
Babysteps to making it harder to publish opinions and information freely.
That’s funny; this argument corresponds point-by-point to the one that anti-vaxxers use: ‘Yeah, getting vaccinated was important when no one had immunity, but now that everyone has it, it’s not as important because of herd immunity; and also did you consider the risks? You should look at the pros and cons.’
Yeah, but they can still be injected with malicious content that asks for logins, passwords, etc., even if the original pages don’t. This is not difficult to do. This is why HTTPS is used in the first place.
Fyi https://doesmysiteneedhttps.com/
Also, from your link:
Reads like an angry “just don’t do it, idiot”. If this site were really dedicated to convincing people to use only https instead of ranting, then why not list some actual arguments against using both at the same time?
How it reads depends on what your mindset is when you approach it. If you read it with an open mindset, it’s sober security advice. If you read it with an adversarial mindset, then it sounds like an attack.
How can we list actual arguments for insecure HTTP when there are no valid ones? Should we list actual arguments for injecting user-supplied inputs into SQL query strings next?
It can be sober security advice but still come across as a rant or at least childish.
Please read my post again with an open mindset yourself. :-)
I did not say that the website should list arguments for HTTP. I said it should list arguments against providing both HTTP and HTTPS.
I’m not sure I could be convinced by any argument for unsecured HTTP, but certainly there seems to be little point in offering any such argument to one who asserts in advance that “there are no valid ones”.
I, in contrast, am merely not sure. As I see it, someone who runs a website has four options, none of which I see as clearly best:
Maybe (4) is better than (3), but I’m not sure.
The article you linked to suggests using Firefox Mobile, which manages its own certificate store. So users of old devices still have a pathway to browsing Let’s Encrypt certs.
So, sure, the argument is ‘I don’t want to manage certs, and I don’t want to pay for certs, and I want users of a vanishingly small number of ancient devices’. If you add on enough conditions, you can definitely find some that aren’t met. My assertion still stands though that this is not really a valid reason to allow HTTP. It’s just being ornery for the sake of being ornery.
Even in pre-https days, how much of a risk was that? I used https everywhere but most people didn’t and I never read about any attacks like that.
On top of that, if the connection from/to the ISP is encrypted, then the attack vector is already reduced dramatically.
The argument “look at pros and cons” is a valid one.
So you personally have not read about insecure HTTP attacks, so they didn’t happen? I don’t know about you, but personally I don’t think it’s wise to make security decisions based on ‘not reading’ about attacks personally.
Just because bad things happen doesn’t mean that it justifies any kind of action.
Fair enough. Let’s do it based on facts and statistics. Do you have any about damage caused due to HTTP instead of HTTPS, optimally measured in $? Because I don’t know any.
Having not really been paying attention at the time to the various XMPP servers and whatnot, was there the same sort of issue around blacklisting and petty politics that seems to plague the Fediverse?
I’ve avoided getting involved in that scene in large part due to the outsize amount of “we don’t peer with so-and-so”, “omg you peer with this site, well one person on that site said problematic thing so we don’t peer with them anymore”, etc. That would seem to be a bigger threat than Meta.
I dunno, the network has been growing pretty well for 6 or 7 years at this point and while there are occasional issues with admin drama, it hasn’t yet ended up causing a mass exodus, and I don’t think it will. The closest it got was the snouts.online incident, but like everything with OStatus/ActivityPub, that just led to there being a little island of people who only talk to each other. There similarly is a “sexualized images depicting minors” island, a “Gab/alt-right” island, and a few intentional islands, but that seems like a success of the federated model to me! Those people have their social spaces, and their presence doesn’t cause issues - whether legal or social - for anyone else.
I don’t necessarily think that coupling hosting with moderation in this way is optimal, but it does work, and more so the more instances there are.
The petty politics are pretty specific to a corner of the fediverse (mostly around Mastodon), I think because of the kind of people who came there in the last wave and why they came, and some pretty big bugs in Mastodon exacerbate it. I’ve not observed this issue on other federated networks like SMTP, XMPP, IndieWeb, etc. to date.
No. There weren’t enough people using XMPP off Google for it to be worth targeting with spam. If you wanted to, it was easy (register a new domain name, deploy a server, and you can spam anyone). Since it was point-to-point, not broadcast, there was no need to block servers for any reason other than spam.
If it had become popular, the lack of spam controls would have been a problem. It’s a bit better than email (you at least need a machine with a DNS record pointing at it to be able to send spam and you can accurately attribute the sender) but not much because the cost of setting up a new server is so low. Some folks were starting to think about building distributed reputation systems. For example, you can build a set of servers that you trust from the ones that people on your server send messages to. You can then share with them the set of servers that they trust and allow messages for the first time from servers on that list, but doing that in a privacy-preserving way is non-trivial.
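A naive, non-privacy-preserving sketch of that bootstrapping idea fits in a few lines of Python (all server names and thresholds below are made up); the hard part the comment points at is doing the second step without revealing who your users talk to.

```python
from collections import Counter

def direct_trust(outbound_log, min_messages=3):
    # Trust the servers our own users have already written to a few times.
    counts = Counter(domain for _sender, domain in outbound_log)
    return {domain for domain, n in counts.items() if n >= min_messages}

def widen(trusted, shared_lists):
    # Accept first-contact messages from servers vouched for by servers we already trust.
    vouched = set()
    for server, their_trusted in shared_lists.items():
        if server in trusted:
            vouched |= their_trusted
    return trusted | vouched

outbound = [("alice", "xmpp.example.org"), ("alice", "xmpp.example.org"),
            ("bob", "xmpp.example.org"), ("bob", "chat.example.net")]
trusted = direct_trust(outbound)
trusted = widen(trusted, {"xmpp.example.org": {"jabber.example.com"}})
print(trusted)  # {'xmpp.example.org', 'jabber.example.com'}
```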
Direct action gets the goods. Let’s hope the strike holds. And I guess not going on SO is solidarity now 😅
Like many opinionated articles about design this one has the implicit suffix “…in a web app.”
A modal window would be a better UI in many circumstances. Why make everything else on the screen vanish just to display this detail?
The reasons given against it seem to be technical issues with web browsers and web APIs. To the extent that’s true, my response is “suck it up, buttercup.” The user’s experience is more important than how much work you had to put into implementing it. (I speak as one who had to implement stuff in the hugely awkward classic Mac OS Toolbox.)
Alternately, maybe we shouldn’t be building applications using a document browser.
Calling a modern web browser a “document browser” is denial at best.
The web was released in 1991, Google Maps in 2005 (14 years later); it’s now 2023 (18 years after that). Web browsers have been application platforms for the majority of their existence, and these days a lot of desktop apps are just a browser with different decor (Electron, various phone app web view widgets, arguably React Native).
This isn’t a bad thing. I can write a webpage once and as long as i color in the lines, it will really run everywhere. Also, I’d much rather have some app devs code running in a tried and tested sandbox than giving their installer root on my box. Especially given that app developers’ incentives are often not aligned with my best interests.
Remember installing apps and getting Bonzi Buddy or 10 obnoxious toolbars you can’t uninstall? I do, and I don’t miss it. The web as an app platform is what really killed that. It’s time to let go of “browsers are document viewers”.
It “isn’t a bad thing” compared to the even worse alternative you present, but that’s not the only alternative. I see no reason to think we would be stuck with that if the dominant application platforms weren’t built on abstractions designed for browsing documents.
I’m not sure how something’s provenance has anything to do with what it’s good at 30 years later (IBM PCs were designed to run boring business applications, yet here we are in a world with Cyberpunk 2077 and Photoshop), or what a better system than “punch a web address into literally any device and poof, app” would look like. A walled garden app store? 10 people managing packages in different distro repos for one piece of software in varying degrees of not-quite-up-to-date? C was designed to make UNIX, so should we pooh-pooh people who use it on Windows or macOS? Is Elixir the wrong tool for writing anything other than telephone switches because that’s what the VM was originally made for? Should we have rewritten operating systems from scratch in the mid 2000s because at the time all consumer OSs had been built for single-processor machines?
No web abstraction built in the last 20 years has been focused on browsing documents, and evolving in generality is what useful software does.
This is one better alternative; another would be like the web but built from the ground up to handle interaction rather than documents.
Sometimes abstractions built for one purpose are suitable for another, and sometimes they aren’t.
no.
:)
Thank you for this comment.
King Ferdinand and Queen Isabella would like to point out that ship’s already sailed some years ago…
There are good HCI reasons to avoid modal interactions, except when you need to capture the user’s attention for an important decision.
The main one is that modal dialogs block interaction with, and also often a view of, the rest of the screen. If you place the pop-over inline/on a new page, then the user can switch how they like by looking at the rest of the page/the other tab-or-window.
(Edit: link now goes to web page. I had submitted the link as a (now-deleted) Lobsters story, because I felt it was a worthy submission in its own right. It turns out that such submissions should nonetheless be comments.)
Agreed, and it’s not even about technical issues. Some of the things are just plain wrong.
I guess this is either clickbait or a troll article.
This is why I’m pretty well against all modals (web or otherwise) - why make everything else on the screen frustratingly unusable just to display this detail? Want to refer to the information under it? Sometimes works, often too bad. Want to select some text out from under it to copy/paste? Rarely works. Etc.
There’s little more annoying to me than a window dinging when I try to click on it. I’d rather have independent actions that happen when you click ok and go away when you click cancel, but otherwise don’t block your other access.
For web, using a separate page can be a better experience. Modals require JS, and lots of JS is slow to load. Modals can often be complete garbage on mobile, especially if they cause the mobile browser to zoom, or parts of them render outside the viewport. I’ve seen loads of modals that could have just been a <form>, and I would have greatly preferred it.
Not anymore they don’t:
https://caniuse.com/dialog
It even has a ::backdrop pseudo-selector, and I think it sets inert on the rest of the body elements so that it doesn’t require a bunch of aria fiddling for proper a11y.
EDIT: They may also get support for back buttons!
Dialog elements still need JS to be opened as modals. Also, there’s quite a few people still on older browsers who won’t be able to use the element without a polyfill.
The amount of JS needed for a modal is probably less than 1kb, which is less than loading a new page. In terms of UX, losing your entire interaction context, or having the page redraw, is worse than a few ms of latency.
I think this is highly contextual. If you’re linking to one specific page somewhere, then yes: in the context of a single page, no JavaScript is likely better than some JavaScript.
But if this is a part of a web app, then your users are usually not opening a modal in a new tab, then going back to the first one to continue working in the app. Apps like that have javascript cost anyway, but it can still be optimized to look fast.
I still like your comments, waaay better than I do the original article. Yes, they can be tricky to get right on mobile. People use huge libraries just to get some fancy modals. To get them right requires a lot of attention to detail.
That’s not to say that using a separate page can’t be a better experience, I just think this is contextual.
I see where the author is coming from; it can be very discouraging to see people flock to shitty, broken, privacy-destroying “products” over “projects” whose main sin is being chronically under-funded. It’s easy to say, “well, people just don’t care about their privacy,” or whatever the VC violation du jour happens to be.
But many people - millions of people - are willing to put up with a little bit of technical difficulty to avoid being spied on, being cheated, being lied to in the way that so many SaaS products cheat us, lie to us, and spy on us. It’s a tradeoff, and there is a correct decision.
There are basically two options; I don’t know which one is true. Either:
a. with sufficient funding, regulation, and luck, we can build “products” to connect people and provide services that aren’t beholden to VCs or posturing oligarchs, quickly and effectively enough that the tradeoff becomes easier to make, or
b. we cannot, and capital remains utterly and unshakably in control of the Internet, and the rest of our daily lives, until civilization undergoes some kind of fundamental catastrophe.
From the author’s bio:
I appreciate that work. It’s vital. Unfortunately, I’m pretty sure we’re going to lose this war unless we can implement drastic, radical regulation against VC-backed tech companies on, frankly, completely unrealistic timelines. We’re already living the cyberpunk dystopia; get ready for the nightmare.
I think the funding discussion needs to start with what libre projects are. For all his faults, I think this was best summarized by Drew DeVault: Open source means surrendering your monopoly over commercial exploitation. Or perhaps to rephrase, libre software is a communal anti-trust mechanism that functions by stripping the devs of all coercive power.
This is a useful lens through which to view libre software, because plenty of projects are so large that they have a functional monopoly on that particular software stack, which provides some power despite the GPL. Also, “products” (which are UX-scoped) are usually of larger scope than “projects” (which are mechanism-scoped), almost by definition. A lot of libre projects are controversial almost exclusively because they’re Too Big To Fork.
So I think there are two sides to the problem here: decreasing the scope (for the reasons above), and increasing the funding.
Funding-wise, the problem is that 1) running these systems requires money, 2) whoever is providing the money has the power, and 3) users don’t seem to be providing the money.
So, there are three common solutions to this: get the funding from developers (i.e. volunteer-run projects), get the funding from corporate sugar-daddies (either in the form of money or corporate developer contributions), or get the funding from average consumers.
Volunteer-run projects are basically guaranteed to lose to corporations - most devs need a day job, so corporations will largely get their pick of the best devs (the best devs are essentially randomly distributed among the population, so recruiting from the much-smaller pool of only self-funded devs means statistically missing out on the best devs), and typically results in the “free as in free labor” meme.
Corporate funding will, even with the best of intentions by the corporations in question, tend to result in software that’s more suited to corporate use-cases than to average users. For instance, a home server might primarily need to be simple to set up and maintain by barely-trained users, whereas Google’s servers might primarily need to scale well across three continents. This has two effects: first, it increases the scope (which is bad per the above paragraphs) and saps the priorities of the project whenever there are hard decisions to make. Second, it gives coercive power to the money-holder, obviously.
So the last option is getting the funding from the average consumer. Honestly, I think this is the only long-term viable solution, but it basically involves rebuilding the culture around voluntarism. As in, if everyone in the FOSS community provides e.g. a consistent $20/month and divvies it up between the projects they either use or plan to use, then that could provide the millions/billions in revenue to actually compete with proprietary ecosystems.
…or it would provide that revenue, if everyone actually paid up. But right now something like 99% of Linuxers et al don’t donate to the libre projects they use. Why is that?
Well for starters, businesses pour huge amounts of effort into turning interested parties into paying customers, whereas plenty of open-source projects literally don’t even have a “donate!” page (and even if they do have such a page, plenty of those are hard to find even when actively looking for them), let alone focusing on the payment UX.
IMO, there needs to be a coordinated make-it-easy-to-pay project (or should I say “product”?), where e.g. every distro has an application preinstalled in the DE that makes it easy to 1) find what projects you most often use (and also what you want to use in future), 2) set up payment, and 3) divvy it up in accordance with what you want to support.
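As a toy illustration of the divvying step only (the project names, weights, and the $20 figure are placeholders):

```python
# Split a monthly contribution across projects according to user-chosen weights.
budget_cents = 20_00
weights = {"my-distro": 3, "my-text-editor": 2, "a-small-cli-tool": 1}

total = sum(weights.values())
allocation = {name: budget_cents * w // total for name, w in weights.items()}
# Hand the integer-division remainder to the top-weighted project.
allocation[max(weights, key=weights.get)] += budget_cents - sum(allocation.values())
print(allocation)  # {'my-distro': 1001, 'my-text-editor': 666, 'a-small-cli-tool': 333}
```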
BTW, I hate the term “donate” (and you’ll notice I don’t use it) because it displays and reinforces a mindset that it’s a generous and optional act, as opposed to a necessary part of “bringing about the year of the linux desktop” or “avoiding a cyberpunk dystopia” or such.
Mastodon is already at millions of MAU, not to mention all the other ActivityPub servers out there. Most people who use ActivityPub don’t really want celebrities to join.
I currently have 5 accounts on ActivityPub servers of various kinds, and with the exception of the overlap between my “professional” and “main” accounts, each one sees a completely different slice of the social archipelago. I have tech and meta discussion on my main accounts a lot, and some news; mostly visual art, spirituality, mental health discussion, and music on my spiritual/mental health advocacy account; personal posts from friends and friends of friends on my private account; and pretty much only microfiction and visual art on my microfiction account.
I bring this up because, while I don’t think that metrics like this are particularly helpful in measuring the success of the network (for example, these metrics don’t include posts from instances that don’t provide this API, and don’t include posts from instances - even moderately-sized ones - that don’t federate widely), I do think people who aren’t embedded in ActivityPub/social archipelago cultures don’t understand the diversity of content that was available even before the Twitter Exodus. There are a lot of people making a lot of art, discussion, commentary, and even reporting via ActivityPub, and most people can find a niche they enjoy.
I think the fansite for Ferris the Crab would break with the new Trademark policy.
https://rustacean.net/
The domain name might be an issue, but Ferris is not part of the trademark for Rust or Cargo. Thankfully, Ferris is in the public domain.
Aside: I wouldn’t call it a fansite exactly — that site was (and AFAIK still is) run by the creator of Ferris, originally to showcase their proposal of a mascot for Rust. Evidently it succeeded. :)
Edit: Oh, I guess you might have meant that it’s a fansite with respect to Rust, rather than to Ferris.
I’m obscurely pleased that it targets the VT100. So many TUI programs these days only work properly with 256 or more colors and very high bitrate.
I plan to use this on my VT420. Very exciting!
I’m already happy with Emacs on my VT420, because of the heavily optimized redisplay for lower baud rates and the mitigations for XON/XOFF flow control. But I’m still excited for a new terminal editor that doesn’t require a GPU accelerated terminal (they have played us for utter fools!).
I agree! I wanted something where you could log into any old server, or use a simple single board computer, and still have a working editor.
This is a fascinating piece of writing which I agree with in general direction but abhor in specific. In particular, the author takes an entirely regressive tack when discussing the trajectory of the development of computing.
Has it, though? Certainly there are techniques for building beautiful objects that are lost, but modern-day artisans still make beautiful weapons, from swords and armor to bows and firearms. The practice of creating beautiful things is not gone; rather, the Met’s Arms and Armor gallery discussed by the author is the collected, filtered product of nearly every armorer in the world from as early as 1300 BCE to today. Of course the things we see in our everyday lives do not meet that standard.
The author parlays this mistaken idea into a generalized sentiment that there is no such thing as a beautiful computer today, but of course, there absolutely is. Even in the author’s own conception of a beautiful computer as a heavily restricted one, there is a thriving community of artisans building some constrained computers of various aesthetics. There’s even prior art for beautifully woodworked machines. And that’s just portables - if we get into desktop machines, there’s a whole world of form-over-function (or form-beside-function) designs out there, from the 1970s to the vast diversity of PC case mods or scratch fabs, including many wooden examples.
The other problem I have is the essentialization of Western culture and Japanese culture. I think it does a disservice to the essential criticism here: we build too much cheap shit that doesn’t look nice or work well, and we should build more things that do what people want and need, look good, and are easy to maintain. The solution is not to eschew networking, software other than a text editor, and so forth. The solution is to build a society that allows people to spend time producing art without fearing they’ll be out on the street because the thing they’re building isn’t profitable.
The woodworking is indeed beautiful, but as I’ve written before, I prefer the vision of another Japan-opinion-haver (William Gibson) in the direction of coral and turquoise.
I don’t think you have to limit “regressive” to the history of computing in particular when the author namechecks Oswald Spengler!
Oof, missed that one. Yikes. 🥴
I don’t have the cultural reference to really understand how that should be understood.
La Wik:
Thanks a lot for the reference. Now, I don’t see how I should interpret this in regard to the text. Is it a joke? A simple cultural reference? Or something else more cringe?
I see this kind of rhetoric a lot around both LLMs and diffusion models, and it worries me.
We’re engineers, right? We’re supposed to be metacognitive to at least some degree, to understand the reasons we come to the conclusions we come to, to discuss those reasons and document the assumptions we’ve made. Maybe what this conversation needs is some points of comparison.
LLMs just predict the next word by reading their input and previous output, as opposed to creating rich, cyclical, internal models of a problem and generating words based on (and in concert with) that model, as I (and I believe most humans) do. (There’s a minimal sketch of that prediction loop at the end of this comment.)
LLMs and diffusion models are just trained on an input data set, as opposed to having a lifetime of internal experience based on an incredibly diverse set of stimuli. An artist is not just a collection of exposures to existing art, they are a person with emotions and life experiences that they can express through their art.
Are you a P-zombie in your own mind? Are you devoid of internal experience at all? Have you never taken in any stimuli other than text? Of course not. So, you are not just an autocomplete system based on your own stories.
Note that I am not saying you could not build a perceptron-derived model that has all of these things, but current LLMs and diffusion models eschew that in their fundamental architecture.
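To make that first point concrete, here's roughly the loop an LLM runs at inference time. This is a minimal sketch, not any real library's API; `next_token_logits` is a made-up stand-in for a trained model's forward pass.

```python
import math
import random

VOCAB_SIZE = 1000

def next_token_logits(context_tokens):
    # Made-up stand-in for a trained model's forward pass: given the
    # token sequence so far, return a score for every vocabulary item.
    # A real LLM computes this with a transformer; nothing here learns
    # or remembers anything between calls.
    return [random.gauss(0.0, 1.0) for _ in range(VOCAB_SIZE)]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt_tokens, steps):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        # The only state carried from step to step is the token list:
        # the input plus everything generated so far.
        probs = softmax(next_token_logits(tokens))
        tokens.append(random.choices(range(VOCAB_SIZE), weights=probs)[0])
    return tokens
```

Whatever structure exists inside that forward pass, the outer loop never hands anything richer than a growing list of tokens back in.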
First off, I want to say I agree with you; I hope it was clear in the post that I don’t actually believe LLMs are currently on par with humans, and that human brains are probably doing more than LLMs right now. And of course an LLM has not been trained on the same kind of stimulus and input data.
But the point of my post, said a different way, is that your comment here is making a category error. You’re comparing the internal experience of consciousness with the external description of an LLM. You can’t compare these two domains. What an algorithm feels like on the inside and how it is described externally are two very different things. You can’t say that humans are creating “rich, cyclical, internal models of a problem” and then only point at matrices and training data for the LLM. The right category comparison is to only point at neurons and matrices and training data.
The difference in training data is a good point, but LLMs are already beginning to be trained on multimodal sensory inputs, so it’s not a huge leap to imagine an LLM trained with something roughly equivalent to a human lifetime of diverse stimuli.
I agree with you that my metacognitive experience is a rich, cyclical, internal model and that I’m generating and thinking through visualizations and arguments based on that model. I certainly don’t think I’m a P-zombie. But you aren’t substantiating which neural substructures produce that experience, or which of them the machines are lacking. Do you know for sure the LLMs don’t also feel like they have rich, cyclical, internal models? How can you say? What is the thing human brains have, using external descriptions only (not internal, subjective, what-it-feels-like descriptions), that the machines don’t have?
It’s not central to your point, and you can probably tell from the “this seems obviously wrong, so I must not be missing any important details” part, but what you said about philosophy of mind in the first three paragraphs is inaccurate. Cartesian dualism was an innovation of the 17th century, and the prior prevailing theories were somewhere between monism and substance dualism. Hylomorphic dualism is probably closer to modern functionalism about mind.
Ironically, this reply:
is untenable on a materialist view of consciousness, and the distinction between “internal” descriptions of experience and “external” descriptions of the experiencing agent is one of the major arguments in favor of non-physical views of consciousness. The eliminative materialist would simply deny that there are such things as rich internal models and the like.
I’m clearly out of my depth with respect to philosophy or I would have attempted to be more accurate, heh, even considering my flippant attitude towards Descartes.
So, forgive the junior question, but are you saying that having a different experience of something based on your perspective or relationship to it is incompatible with a materialist view? What would materialism say about the parable of the blind men and the elephant?
No, the critical distinction you made above is between the internal and the external. When a human experiences red, there is a flurry of neuronal activity (the objective component) and the human has the experience of redness (the subjective component). The hard problem of consciousness is explaining why there is a subjective component at all in addition to the objective component. People who think that LLMs aren’t “conscious” are denying that LLMs have that subjectivity. The idea of a P-zombie is someone who has all the neuronal activity, but no internal experience of redness, i.e. someone who has all the objective correlates of consciousness, but there’s “nobody home”, so to speak.
I think Chalmers’ work is accessible on this subject, so you might try reading his “Facing up to the problem of consciousness” if you want to learn more.
For the blind men and the elephant, the question isn’t the difference between the men having different experiences, it’s between the different components of the experience each is having.
I don’t think it is a category error. We can observe at least part of the mechanism by which our brains form the cognitive models we use to produce speech. When I say “rich, cyclical, internal model”, I’m being specific; our brains think about things in a way that is:
- grounded in a lifetime of diverse, multi-sensory stimuli rather than text alone;
- continuously self-modifying, with connection strengths adjusting during thought itself rather than in a separate training phase; and
- cyclical, feeding internal representations of concepts back into further thinking without first flattening them into words.
LLMs do not do this; as far as I know, no artificial neural network does, and no existing ANN architecture can. We can observe, externally, that the processes our brains use are dramatically different from those within an LLM, or any ANN, in these ways.
Okay, I didn’t understand the specificity you meant. However:
Isn’t this similar to what GPT-4’s multimodal training claims?
Isn’t this like backpropagation?
Isn’t there a whole field dedicated to trying to get ANN explainability, precisely because there are internal representations of learned concepts?
I’m not familiar with the details of multimodal training; thank you for bringing that up.
As to the other two, I specifically don’t mean backpropagation in the sense that it’s typically used in ML models, because it’s not continuous and not cyclic. In a human brain, cycles of neurons are very common and very important (see some discussion here), and weight updating happens during thought rather than as a separate phase.
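To illustrate the "separate phase" part, here's a toy sketch; the data, network, and hyperparameters are all made up, and nothing beyond NumPy is assumed. Backpropagation and weight updates happen inside a training loop, and once that loop ends the weights are frozen no matter how much the network is used afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up toy task: classify whether a point's coordinates sum to > 0,
# using a tiny one-hidden-layer network.
X = rng.normal(size=(64, 3))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 1))

def forward(x, w1, w2):
    h = np.tanh(x @ w1)
    out = 1.0 / (1.0 + np.exp(-(h @ w2)))
    return h, out

# Phase 1: training. Backpropagation happens here, and only here.
lr = 0.5
for _ in range(500):
    h, out = forward(X, W1, W2)
    grad_logits = out - y                         # sigmoid + cross-entropy
    grad_W2 = h.T @ grad_logits / len(X)
    grad_h = (grad_logits @ W2.T) * (1 - h ** 2)  # back through tanh
    grad_W1 = X.T @ grad_h / len(X)
    W2 -= lr * grad_W2                            # weights change during training...
    W1 -= lr * grad_W1

# Phase 2: inference. The weights never change again, no matter how much
# the network is used -- no ongoing adjustment "during thought".
_, predictions = forward(rng.normal(size=(5, 3)), W1, W2)
print(predictions.round(2))
```

The split between the two phases is the point: learning happens offline, before and apart from use.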
Similarly, while it’s true that ANNs have internal representations of concepts, they specifically cannot feed back those internal representations; even in use cases like chat interfaces where large amounts of output text are part of the inbound context for the next word, those concepts have to be flattened to language (one word at a time!) and then reinflated into the latent space before they can be re-used.
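And here's the flattening I mean, zooming in on a single step of that loop, again with made-up stand-ins rather than a real model: whatever the latent representation holds, the only thing that survives into the next step is one discrete token.

```python
import random
random.seed(0)

VOCAB, DIM = 100, 8

# Made-up stand-ins for a real model's learned parameters.
EMBEDDING = [[random.random() for _ in range(DIM)] for _ in range(VOCAB)]
UNEMBED = [[random.random() for _ in range(VOCAB)] for _ in range(DIM)]

def think(vectors):
    # Stand-in for the transformer layers: produce one latent vector
    # summarizing the context. This is the "internal representation".
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(DIM)]

def generate_step(tokens):
    vectors = [EMBEDDING[t] for t in tokens]   # reinflate token IDs into vectors
    hidden = think(vectors)                    # the latent state
    logits = [sum(hidden[d] * UNEMBED[d][v] for d in range(DIM))
              for v in range(VOCAB)]
    next_token = max(range(VOCAB), key=lambda v: logits[v])
    # `hidden` is discarded on return. The next step receives only the
    # flattened, one-token-at-a-time record and must rebuild any concept
    # it needs from that.
    return tokens + [next_token]

context = [1, 2, 3]
for _ in range(4):
    context = generate_step(context)
print(context)
```

The `hidden` vector could encode arbitrarily rich structure; none of it is available to the next step except through the single token it helped choose.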
Generating rich models of a problem can be phrased as a text completion task.
To the degree that they are expressed through their art, LLMs can learn them. They won’t be “authentic”, but that doesn’t mean they can’t be plausibly reproduced.
I don’t disagree. Text models don’t do these things by default, on their own. But that’s less of a skill hindrance than it sounds: it turns out that lots of human abilities don’t seem to need these capabilities. Second, approaches like chain of thought indicate that LLMs can pick up substitutes for these abilities. Third, I suspect that internal experience (consciousness) in humans is mostly useful as an aid to learning; since LLMs already have reinforcement learning, they may simply not need it.
I’m on call this week, and I’m working on finishing some things up before I take some PTO next week to help my partner recover from surgery. I’ve got a ton of miscellaneous open source nonsense to catch up on, and I’m also working on the longest solo writing project I’ve ever done. There’s definitely some adjusting to do!
I’m sure the developer(s) and maintainer(s) of PeerTube will appreciate it.
This is actually a super fair point, thanks for mentioning it. I haven’t used PeerTube in a while, but it’s pretty good. I’ll amend that line.