MUAs should have no issue since they are supposed to correctly parse HTML. Other applications, like word processors, especially the ones that immediately turn anything.abc into a URL, will have a hard time.
As an experiment, it is pretty cool this can be done. For practical use, I won’t ever do it - my washing machine maker has no business knowing my clothes-washing behavior.
Interesting example of buying alcohol (never thought about it).
My primary example tends to be an employee badge that is verified at the front door by security personnel (AuthN). Once you are allowed entry into the office, AuthZ determines which areas you are allowed to access (doors to certain labs, HR and Finance filing offices, etc. may not unlock with the same badge). Of course, most of the time the discussion about AuthN and AuthZ is with engineers, so this example is pretty close to home.
Also, the buying alcohol example is uniquely Western, and may not apply to all audiences worldwide.
And of course, AuthZ comes AFTER AuthN. :)
What’s weird to me is that OpenID Connect does authentication, but it’s a protocol on top of OAuth, which does authorization. That layering puts AuthZ first, but isn’t that backwards? I don’t really get it, not yet anyway.
Well, OpenID is about identification, which is what authentication is also about.
It doesn’t really make sense to do authorization and then authentication. That would be like the shopkeeper checking whether your ID says you’re old enough, then selling you the alcohol and then only afterwards checking if the ID is yours: by then it doesn’t matter because you’ve already got the alcohol.
This depends a lot on the situation. To give a real-world example, my front door checks that I am authorised to open it by checking whether I have a key. It doesn’t perform authentication at all and that is a feature: the key works if I lend it to someone.
In a lot of situations, you don’t care which party is performing an action (except, perhaps, for audit logging), you just care that they are authorised to perform that action. If I have a private repository, for example, I might care a lot about who pushes to it so that I know who introduced bugs, but not care who downloads it as long as they are people that I have authorised to do so.
I was trying to say that I think it makes sense to do authorization only, or to do authentication and then authorization, but that it does not make sense to do authorization and then authentication.
That said, your example made me think of some examples where authentication is optional, and then you might want to authorize and then (optionally) authenticate (e.g. pushing a mix of signed and unsigned commits to github).
I think the use cases where you’d want to do authorization and then authentication are ones where you’re doing authentication solely for auditing. This is especially common if you have a low-privilege step where you don’t need attribution in your audit logs and a high-privilege step where you do. I suppose this can be thought of as authorise-only then authenticate-then-authorise, because then you’d want to ensure that you had the information for your audit log before you permitted the operation that required authentication.
I don’t see how this will change anything… There have always been good and bad scammers. In the end, I believe if you understand how your bank or expenses communicate with you, and how you can verify communication, there is nothing to brace for. If anything this may just make email domains more important to flag as “verified” or “unverified”. I would be more worried about receiving physical mail that looks a lot like official letters from your bank.
I don’t see how this will change anything
Most scams targeted at my nationality are very easily spotted, simply because their German is that bad. Those LLMs are literally made to spew out text. And based on my little experience with them, they would make a really good scam writer in native German. They can easily hold a conversation without getting weird or losing interest because it takes too long to hook you. If you set those up correctly, you can operate a giant pipeline of scammy bots. Who cares if it takes half a year to befriend you; just run enough of them in parallel and you will win eventually. Also, the CodeGPTs make it even easier to spew out fake websites (relevant for good scam mails) that operate well enough to fool you. It’s not about being perfect; it’s about making it even easier to find the few needles in the haystack that actually give you money.
You know the most successful scam here currently? An SMS with “Hello, I’m your daughter, I lost my phone, can you text me on WhatsApp at this number?” What if you train those bots to be good enough to look like a typical teenager in trouble? Heck, you can even let them teach themselves whatever teenager slang currently works, without much additional overhead.
I’m not trying to create fear; it’s just very obvious how much easier and more successful it becomes to operate those scams now. I’m waiting for the day a company data leak turns into a giant money loss, because the company later gets scammed based on the leaked email exchanges. Sure, that’s possible already, but why try to pick which of the employees you can use, and what kind of exchange and subject is normal, when you can totally automate it? If “hey boss, please send money to XX for that invoice” already works well enough, imagine what this could do.
Most scams targeted at my nationality are very easily spotted, simply because their German is that bad.
The point often raised in discussions of this is that scammers already have the ability to send at least the first message or two written in a clear, fluent form of the target’s preferred language. But they choose not to, because sending a message in bad German (to you) or bad English (to me) acts as a filter. You spot the problem and immediately ignore it, so only people who are fooled enough/greedy enough/unaware enough to miss or overlook the problem will respond, which means that the later stages of the scam have a higher proportion of people who will miss or overlook all the other warning signs and go all the way through to giving up money.
The article specifically addresses this, and one of its core predictions is that LLMs will reduce this reliance on early filtering: they will make stringing marks along much cheaper, so there’s no need to concentrate only on those with a low drop-out rate later in the process.
That’s a good counterpoint. I still think that with better technology, you don’t need to filter out so much. So better language becomes interesting.
Edit: And as I’ve just seen in simonw’s post: yeah, romance scams and well-worded disinformation (spambots) that spread fear and made-up stuff are a thing.
Most scams targeted at my nationality are very easily spotted, simply because their German is that bad.
Is the assumption here that good German = a good scam?
Those LLMs are literally made to spew out text. And based on my little experience with them, they would make a really good scam writer in native German.
Uh, based on what data? You’ve had experience with AI writing good scams in the past? We’re all speculating about the effectiveness of this.
They can easily hold a conversation without getting weird or losing interest because it takes too long to hook you. If you set those up correctly,
“If you set those up correctly” is a huuuuge undertaking in itself. It’s like saying “if you scam correctly”!
you can operate a giant pipeline of scammy bots.
I think there’s a lot of other skills required to set up this pipeline. Like technical skills to create the pipeline in the first place.
Who cares if it takes half a year to befriend you; just run enough of them in parallel and you will win eventually.
Sure, given everything is set up perfectly and works well, the scam does scam… Again, all speculation.
Also, the CodeGPTs make it even easier to spew out fake websites (relevant for good scam mails) that operate well enough to fool you. It’s not about being perfect; it’s about making it even easier to find the few needles in the haystack that actually give you money.
Uh, phishing websites have been 1:1 copies for a looooooong time. This is not anything new. There have even been programs for a while now that can do it without any effort.
You know the most successful scam here currently? An SMS with “Hello, I’m your daughter, I lost my phone, can you text me on WhatsApp at this number?”
I think there’s more to the scam here than just that.
What if you train those bots to be good enough to look like a typical teenager in trouble? Heck, you can even let them teach themselves whatever teenager slang currently works, without much additional overhead.
Show me a scam that involves typical teenager problems from the start to the end and then I can evaluate and answer this better.
I’m not trying to create fear; it’s just very obvious how much easier and more successful it becomes to operate those scams now. I’m waiting for the day a company data leak turns into a giant money loss, because the company later gets scammed based on the leaked email exchanges. Sure, that’s possible already, but why try to pick which of the employees you can use, and what kind of exchange and subject is normal, when you can totally automate it? If “hey boss, please send money to XX for that invoice” already works well enough, imagine what this could do.
It’s obvious that writing proper English or German or whatever language is easier with LLMs, yes. But there have been scammers who are native English or German speakers for a long time now.
Okay, random made-up example: without ChatGPT, 10% of the population fell for a scam. With ChatGPT, it is now 35%. The more “believable” the scam is, the higher the chances of falling for it. Of course, there will be some inflection point where people become smarter, but in the near future it seems scammers are going to make some real money!
Yeah, they did a number on you there, because ChatGPT is always going to be too expensive. The interesting takeaway is that we are already getting to the point where we can run half-decent AI on consumer hardware.
I guess ‘LLM scams’ is too nerdy, but how about this: “AI is coming for your bank account” ;)
Yeah, there’s a tradeoff in giving a piece like this a title – does the average reader understand it vs. does it get at the core nugget / idea?
I think one of our main takeaways is that scam follow-up, not initiation, is now made way easier by conversational LLMs and the fact that they can be run on consumer hardware; and the fact that they confidently BS their way through anything is a feature, not a bug.
I was talking to a scammer selling car parts (that they didn’t have) the other day, and it took me five or six emails to realise they were full of shit. My first thought was: man, if this were ChatGPT, I bet I would have wasted a lot more time on it.
Exactly, yeah – saves them time, wastes your time, and has a higher chance of success than an unsophisticated scammer.
I was thinking about this – I guess with LLM-generated text we may no longer be able to distinguish a scam email from a real one. No more PayPall or silly typos. :-)
The silly typos are on purpose, to groom their marks. The idiots don’t notice the typos, and the people who do notice are exactly the people they don’t want to target. Chesterton’s Typos.
But the typos are there only because less gullible responders waste scammers’ time. If scamming is automated, they can go after harder-to-scam victims too.
I know you meant “don’t waste”. Yeah, the existence of LLMs means that an LLM can stalk and customize its attack to the individual prey. Romance scams will rise. They could construct bots to clone the voices of existing phone support people and their methods. The world is getting cyberpunk way too quickly.
That “always” is not going to age well. It is already rapidly advancing towards running on cheaper hardware. It’s likely already cheaper for triaging spam responses and followups than manual labor, and is more scalable.
That’s the nuance in the comment and the discussion about the title. ChatGPT is the model by OpenAI. It is expensive not because it requires beefy hardware to run, but because OpenAI can charge whatever they want.
But it’s not the only LLM in town and Facebook’s leaked LLaMA can be run by anyone without paying licensing costs. That’s the cheap LLM for anyone to run, but it’s not ChatGPT.
That’s the nuance in the comment and the discussion about the title. ChatGPT is the model by OpenAI. It is expensive not because it requires beefy hardware to run, but because OpenAI can charge whatever they want.
I can’t share the exact costs, but from what I’ve heard internally, the margins on most of the OpenAI-based products in Azure are much lower than you might expect. These things really are expensive to run (especially if you factor in the amortised cost of the insanely expensive training, but even without that the compute costs for each client running inference are pretty large). My cynical side says that this is why Azure is so interested in them: consumer demand for general-purpose compute is not growing that fast, but if you can persuade everyone that their business absolutely depends on something that can only run on expensive accelerators then you’ve got a great business model (and, since you’re not using them all of the time, it’s much cheaper to rent time on cloud services than to buy a top-of-the-line NVIDIA GPU for every computer in your org).
Yep. My rumor mill is even more dire; scuttlebutt is that OpenAI is losing money on every query! They’re only accepting the situation because of hopes that the cost of inference (in the cloud) will continue to fall.
I’m surprised they’re already running at a profit! (Unless you mean the margins are an even lower negative than the negative I previously thought they were.)
In my experience, header-land moves faster than application-land, so it is always a game of catch-up. The overall security of a web application should not depend only on the presence (or absence) of a particular header, especially because these headers are merely guidelines/recommendations to browsers, and there is no guarantee browsers will honor any of them.
Before CORS, there was just the same-origin policy, introduced by Netscape 2.02 in 1995. CORS gives the server a way to allow requests which would otherwise have been denied by the same-origin policy. If a browser doesn’t support CORS, it will just block requests which violate the same-origin policy, and no security is lost.
But the weird part (to me) is that the browsers don’t block the requests (for requests that do not need to be preflighted): they actually send them to the server; they just don’t give scripts access to the response. That feels like the worst of both worlds.
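To make that concrete, here is a minimal sketch of the server side (a toy Rust server on 127.0.0.1:8080 and a hypothetical page origin https://app.example, both made up for illustration). For a simple, non-preflighted cross-origin GET, the request reaches this server no matter what; the Access-Control-Allow-Origin header only decides whether the browser lets the page’s script read the response.

```rust
use std::io::{Read, Write};
use std::net::TcpListener;

// Toy HTTP responder, for illustration only (hypothetical port and origin).
fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;

        // Read (and ignore) the request. For a simple cross-origin GET the
        // browser has already sent it by the time CORS is evaluated.
        let mut buf = [0u8; 4096];
        let _ = stream.read(&mut buf)?;

        let body = r#"{"ok":true}"#;
        // Without the Access-Control-Allow-Origin line below, the browser
        // still receives this response; it just refuses to expose it to
        // scripts running on other origins.
        let response = format!(
            "HTTP/1.1 200 OK\r\n\
             Content-Type: application/json\r\n\
             Access-Control-Allow-Origin: https://app.example\r\n\
             Content-Length: {}\r\n\r\n{}",
            body.len(),
            body
        );
        stream.write_all(response.as_bytes())?;
    }
    Ok(())
}
```

So a script on https://app.example can read the body, while a script on any other origin still causes the request to hit the server but gets nothing back that it can use.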
Not long-term enough. My “disposable” servers, aka web/irc/random, are usually Debian, but I hate redoing mail servers, so there Ubuntu LTS comes in handy. Only doing that dance every 5 years is great.
Debian supports every release in LTS for about 5 years: https://wiki.debian.org/LTS
Buster was initially released in July 2019, and will leave LTS in June 2024.
Yeah, no.
If your installation isn’t a few decades old, that’s not long-term.
https://www.theregister.com/2022/07/25/ancient_linux_install_upgraded/
Set up your config in Nix and then you can literally make the ISO with the whole system preconfigured (assuming you don’t need to deal with partitioning). If ‘redoing’ is difficult, then write it to be reproducible so it’s not difficult.
You’re missing the point, but it’s my fault for not making it clear. It can only be reproducible if you stick to the same software stack, or sometimes the same major version.
I choose LTS for mail servers because I decide on something and then only apply security updates for a few years; I am not switching to newer versions, which is why I don’t see how Nix would bring any benefit. Yes, I admit that this is a pet and not cattle, but I’m not an enterprise; I have a mail server for 4 people. This is how I want to run it. What you proposed would work for what I said about the “disposable” servers, e.g. web. I can probably stomach an Apache 2.2 -> 2.4 upgrade every decade, but it’s a completely different problem.
Also, this is more of a feeling, but I don’t have high confidence that the Nix demographic would backport security fixes for whatever-smtpd 2.x from 2023 until 2028. I’m kinda confident I would be able to get 3.x and 4.x during those years, and pretty fast.
Nix expects you to stay mostly on track. There is certainly backporting for security fixes, but they mostly want you on the latest version. The way the modules are set up, though, they really help mitigate the entropy issues of a typical Linux setup where configs grow stale and the machine falls over. The modules will either abstract things away and migrate for you, or be explicit about deprecation, so if the underlying configuration changed between Apache 2.2 and 2.4 you would not be able to upgrade without putting your config in a working state again. Also, if a config file does break, you can reboot into an older working version. If you keep the state in a VCS, then it’s trivial to spin up that exact state on a different machine too (minus the stateful parts like the actual mail).
Personally I would prefer doing regular upgrades here and there and making sure I’m kept up to date, rather than languishing on a stale version (I understand it involves more maintenance).
Unless it connects over Bluetooth. Then it won’t, probably ever, because having support for the one industry-wide wireless-peripheral standard is apparently less important than being secure so they removed the entire subsystem.
I am not mocking. This really happened.
I do not build distros, and I am very happy to say that I don’t build or run servers any more, but given what OpenBSD is and does, how come projects like m0n0wall, pfSense, or OPNsense don’t use it? Isn’t this exactly the sort of thing it should be ideal for?
Why would you use bluetooth on a server?
They make opinionated choices, and security is their highest priority. I like that.
Well, as an example, before I deployed an OS on a server, I’d want to be very familiar with it. Run it for a while first. Try it in a VM, maybe run it on a desktop for a while.
I have machines that only have a Bluetooth mouse. Frankly, if an OS says “we support machine X” – as for example, OpenBSD says it supports M1 Macs – then I’d expect that everything on the machine worked. So for instance if I had an M1 Macbook Air with a Magic Mouse and Magic Keyboard (I don’t, but I’ve owned both and didn’t like them and sold them), I’d expect them to work.
But they won’t.
For clarity, I have reviewed OpenBSD:
https://www.theregister.com/2022/04/22/openbsd_71_released_including_apple/
https://www.theregister.com/2022/10/21/openbsd_72_released/
This is the sort of question I got asked, on the Reg and on Twitter and so on.
It is a real issue and it really does affect people.
If you had, say, only a Bluetooth keyboard and mouse on a Mac, which many, many Mac owners do, then it wouldn’t work, even as a server.
So, yes, I think this kind of thing does matter, and matter a lot.
See my reply to @frign above.
Seems like an inconveniently short release cycle for a server. I’m running openSUSE Leap/Tumbleweed on all my (Linux) VPS servers now and I have no complaints. Ubuntu is “fine” but snapper was not really a selling point for me.
UI is out of the picture for a server, so it boils down to the Linux distro one is comfortable with. Package manager familiarity and related plumbing scripts also play a role in the overall comfort aspect. Other than that, all Linux distros are created equal, IMO.
I think the key idea here is the idea of unintended misuse, from the quote “The larger a language is, the easier it is for users to misuse it without even knowing it.” C++ suffers from a vast proliferation of “foot-guns,” which is a colloquialism that I allege sometimes means the same thing: features that engender misuse. Another aspect of it is interactions between features that complicate reasoning. The classic example from C++ is the relationship between default arguments and overloaded functions. Most languages don’t have both these features. This leads us to the discussion of orthogonality—the idea that features A and B are non-interacting. “Bigness” becomes a design smell because it suggests to us that the cross product of features is getting unmanageable by humans, so there could be unpredicted interactions—the condition necessary for unintended misuse.
But we still have “big” languages like Python, for which the main sticking point is something other than the size of the language itself—package management or performance, for Python, are usually bigger complaints than Python’s linguistic complexity. I allege that this is because there is a certain unity of design there which is nudging the evolution of the language away from non-orthogonal features. Rust also has this. And I think this is why you see strong reactions to these languages as well—love them or hate them, they have a design ethos.
I think it goes both ways – if the language is small (like C), it also encourages foot-guns. I don’t know what the optimal middle ground is here, though.
I’m not a C practitioner, but my sense is that unintentional misuse of C is largely about the memory model and pointers. C++ has these same problems but doubles the surface area (because new and malloc are both present) and then increases it more by making it difficult to tell when allocations occur, and then making it difficult even to tell if you’re looking at function calls or something else thanks to operator overloading. C has a difficult computational model to master, but C++ adds quite a bit of “language” on top of a larger computational model.
Someone really needs to explain the bashing of operator overloading. Function overloading doesn’t get nearly as much criticism, and it’s the exact same thing. Perhaps even a bit worse, since the dispatch is based on the types of arbitrarily many arguments.
And by the way, it’s the absence of operator overloading that would surprise me. First, to some extent the base operators are already overloaded. Second, operators are fundamentally functions with a fancy syntax. They should enjoy the same flexibility as regular functions, thus making the language more orthogonal.
(Now you probably don’t want to give an address (function pointer) to primitives of your language, and I know operators tend to implement primitives. That’s the best objection I can come up with right now.)
I think there are two sources of objection, one named by @matklad below having to do with performance-oriented developers coming from C. The other pertains to overloading generally and is (AFAICT) based on the non-orthogonal combination of function overloading with functions permitting default arguments that makes resolution cognitively demanding even on people who like operator overloading in other languages.
Yeah, in my experience Rust’s overloaded operators work pretty well, because there are no default args and no overloading of functions. If you have an operator somewhere in your program, there is exactly one function it always calls in that context, determined 100% by the type of the first argument. That’s a lot easier to reason about.
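For anyone who hasn’t used it, here is a minimal sketch of that mechanism (Vec2 is just a made-up example type): the + on Vec2 resolves to the single std::ops::Add impl for that type, so there is exactly one named function behind the operator.

```rust
use std::ops::Add;

// A hypothetical 2D vector type, used only for illustration.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Vec2 {
    x: f64,
    y: f64,
}

// `a + b` desugars to `Add::add(a, b)`; the impl is picked from the type
// of the left-hand operand (together with the declared Rhs type).
impl Add for Vec2 {
    type Output = Vec2;

    fn add(self, rhs: Vec2) -> Vec2 {
        Vec2 { x: self.x + rhs.x, y: self.y + rhs.y }
    }
}

fn main() {
    let a = Vec2 { x: 1.0, y: 2.0 };
    let b = Vec2 { x: 3.0, y: 4.0 };
    assert_eq!(a + b, Vec2 { x: 4.0, y: 6.0 });
}
```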
determined 100% by the type of the first argument.
Not really
My impression, as someone who is about halfway through the Rust book, is that in general Rust provides abstractions but does so in a way that is unlikely to lead to unexpected performance issues. Is that your experience?
More or less. Doing things with a potentially-expensive performance cost is generally opt-in, not the default. Creating/copying a heap object, locking/unlocking a mutex, calling something via dynamic dispatch or a function through a pointer, etc. Part of it is lang design, part of it is stdlib design.
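A small sketch of what that opt-in looks like in the source (illustrative only; the Greet trait and English type are made up): allocation, cloning, locking and dynamic dispatch each appear as an explicit call or type rather than happening implicitly.

```rust
use std::sync::Mutex;

// Made-up trait and type, purely for illustration.
trait Greet {
    fn greet(&self) -> String;
}

struct English;
impl Greet for English {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

fn main() {
    // Heap allocation is spelled out, and dynamic dispatch only happens
    // through an explicit `dyn` trait object:
    let boxed: Box<dyn Greet> = Box::new(English);
    println!("{}", boxed.greet());

    // Copying an owned String is an explicit `.clone()`, never implicit:
    let s = String::from("data");
    let _copy = s.clone();

    // Locking is an explicit call that returns a visible guard:
    let m = Mutex::new(0);
    *m.lock().unwrap() += 1;
    assert_eq!(*m.lock().unwrap(), 1);
}
```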
That’s the best objection I can come up with right now.
But that’s the thing! That’s exactly what perf-sensitive people object to: needing to mentally double-check if + is an intrinsic or a user-defined function.
The second class of objection is to operator overloading, which also allows defining custom operators and precedence rules. That obviously increases complexity a lot.
The third class of objections is that implementing operator overloading sometimes requires extra linguistic machinery elsewhere. C++ started with a desire to overload +, and ended up with std::reference_wrapper, to name a single example.
It would be neat to have a language where the intrinsics are defined like functions, but then operators can be defined to call the intrinsics. So, if your CPU has a div+mod instruction, you can call __divmod(x, y), but to make it convenient, you can bind it to a custom operator like define /% <= __divmod; let z, rem = x /% y.
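Rust doesn’t let you define new operator symbols, but half of this idea is already there; a rough sketch (divmod below is an ordinary function standing in for a hypothetical __divmod intrinsic, not a real one): every built-in operator is callable by name through its trait, and an intrinsic-ish operation can be exposed as a plain function.

```rust
use std::ops::Add;

// Stand-in for a hypothetical `__divmod` intrinsic; on many targets the
// optimizer collapses the two operations into a single div+mod anyway.
fn divmod(x: u32, y: u32) -> (u32, u32) {
    (x / y, x % y)
}

fn main() {
    // `1 + 2` and `Add::add(1, 2)` are the same call: the operator is
    // just fancy syntax for a named function.
    assert_eq!(1 + 2, Add::add(1i32, 2));

    let (q, rem) = divmod(17, 5);
    assert_eq!((q, rem), (3, 2));
}
```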
I don’t think C++ is a good example in this discussion, because it’s an outlier in language design. It’s not just “big”, but also built on multiple layers of legacy features it doesn’t want any more, but can’t remove. There is a lot of redundancy and (if it wasn’t for back compat) unnecessary complexity in it. So it’s not a given that a language that isn’t small is necessarily like C++.
Rust is relatively big and complex, but mostly orthogonal in design, and has relatively few surprising behaviors and footguns. Swift went for big and clever design with lots of implicit behaviors, but its features are not as dangerous, and apart from SwiftUI, they don’t fragment the language.
On the contrary, I think that the whole point of the article is to suss out what it is about big languages that make them worrisome, and the tendency of languages to inflate over decades. C++ is a pathological case in many ways but Rust and Swift are still very young.
I keep hearing this “if it keeps growing, it’ll end up like C++”, but I don’t think this has actually ever happened. I can’t think of any language that has painted itself in a corner as much as C++.
Scheme is older than C++. Ada and Erlang are about as old as C++, and did not jump the shark. Java and C# have been growing for a long time now, have expanded a lot, and still hold up reasonably well. Even PHP, which has a reputation for being a mess and has tough backwards-compat constraints, managed to gradually move in a less messy direction.
I can’t think of any language that has painted itself in a corner as much as C++.
As much as C++, and survived? None that I can think of. Honorable mentions? I can think of several: Perl 5, bash/unix shell, PHP. Scala and C# keep trying to get there too, from what I can tell.
And on the other extreme you can be convicted of murder, triggering a series of events that lets your work go to waste. Hans Reiser and Reiser4.
This is a weird, unnecessary comment. “Can be” like it’s something that just happened to him, instead of something he did and faced consequences for. And why bring it up at all, when it’s totally not relevant?
Apple maintains a tighter control over what apps are allowed on its phones, so users would have to “jailbreak”—or manually remove restrictions from—their devices to install TikTok.
While it’s tremendously stupid to ban TikTok and leave Facebook alone, one silver lining could be increased popularity of jailbreaking procedures and an easy-to-convey example of why walled gardens suck.
easy-to-convey example of why walled gardens suck.
Pointless, IMO - we had walled gardens back in the day with AOL and Het Net and people did learn they sucked. But now we’re back to where we started in many ways. People forget such lessons quite easily.
We’re seeing a mass exodus from Twitter to Mastodon, but that’s not because people value Mastodon’s decentralised character. It’s just a usable alternative that happens to be there.
Even developers flock to such centrally controlled platforms as GitHub, while having known Google Code shutting down, SourceForge’s bad practices and things like BitBucket shutting down Mercurial hosting etc.
Blame network effects and “laziness” - most people value convenience, usability and low cost over everything else.
I doubt it will ever be an “all or none” scenario. Think of the usefulness to society: TikTok serves no useful purpose other than entertainment, which can be easily dispensed with, while Facebook does allow people of common interest to share ideas and thoughts (like local hiking groups, meetups and so on). Policy makers have to take into account several such nuances before making the call, so I don’t think it is “tremendously stupid” to ban TikTok and leave Facebook alone.
That said, knowing the history of the CCP Politburo, I am inclined to ban TikTok first and then look at Facebook later (though thankfully I am not one of the “policy makers”). :-)
TikTok has heaps of educational content; if you demonstrate an interest in that, it will show it to you.
TikTok serves no useful purpose other than entertainment
This is just not true. Tons of TikTok videos are educational or informative. I’ve learned things about linguistics, cooking, history, woodworking, geography, botany, chemistry… you name it. Since TikTok tends to show you lots of videos from people you don’t follow, I’ve also learned a lot about other cultures—I see a lot from people in other countries, and people whose social or cultural groups are not well represented in the other online (or offline) spaces I inhabit. I’m not a fan of the CCP either, but I don’t think it’s fair to write off all of TikTok as frippery just because a lot of the content is “entertainment.”
TikTok serves no useful purpose
Honestly that just makes it worse from a data protection angle. If your concern is protecting people, it’s already very easy to avoid the harm of TikTok, but without help from regulation, protecting yourself from Facebook is much more difficult.
There’s another catch-22, also related to TikTok, which I think shows just how unnecessary the whole “national security” angle is. I don’t know if TikTok poses one or not – I think that, in fact, is mostly irrelevant when it comes to data collection, for reasons that Schneier already points out in the article so I’m not going to repeat them.
A while ago TikTok caught some flak because someone found out the Chinese version also has a bunch of educational content for kids, which it pushes to young users based on some timing and location rules. It’s obviously attractive to frame this as a conspiracy to brainwash Western children’s minds, but the explanation is a lot more mundane: Chinese legislation forces all media companies that offer content to children under a certain age to include some proportion of educational content. Western countries could enact similar legislation – or, in the case of some of them (including, I believe, my home country), re-enact it, as they once had such legal provisions and eventually dropped them.
Same thing here. If TikTok does, indeed, collect data that poses a national security risk, at least some of that data is bound to come from some high-interest people’s devices, not from everyone’s. There’s no way collecting that data from a handful of people’s phones is risky for national security, but collecting it from everyone’s devices is not risky for consumers’ and private citizens’ rights in general. Most Western countries already have the constitutional basis to ban this if they want to.
The “national security” risk angle is related to TikTok’s ability to control narratives, shape thinking and potentially influence elections in the US. Of course, this can be argued about FaceBook and other social media companies, but none of them have the share among young people as TikTok does.
Yeah, because Facebook wasn’t a hotbed of misinformation for older conservatives. Like. Facebook is notoriously awful about letting troll farms just manufacture whatever narrative they want and it just not being questioned.
Those old conservatives you clearly despise (don’t worry they’ll be dead soon) are citizens. The CCP are not.
Yes, because citizenship is the issue here /s. I’m all for placing restrictions on social media companies to have responsibilities wrt misinfo, but like, Fox News exists in America; we don’t get to point fingers here.
Many countries/regions have rules restricting “foreign” apps/companies/etc. from receiving or processing their citizens’ data.
Not saying that it’s good to do that, but it’s not some sort of unprecedented new thing here, and any reasonable discussion about it has to understand and account for current practice.
A while ago TikTok caught some flak because someone found out the Chinese version also has a bunch of educational content for kids, which it pushes to young users based on some timing and location rules. It’s obviously attractive to frame this as a conspiracy to brainwash Western children’s minds, but the explanation is a lot more mundane: Chinese legislation forces all media companies that offer content to children under a certain age to include some proportion of educational content. Western countries could enact similar legislation – or, in the case of some of them (including, I believe, my home country), re-enact it, as they once had such legal provisions and eventually dropped them.
Sounds like the Chinese equivalent of https://en.wikipedia.org/wiki/Regulations_on_children%27s_television_programming_in_the_United_States , which apparently subtly regulated a bunch of things about TV shows I remember nostalgically from my 90s childhood, and which seems pointless to me in retrospect. They should’ve let Weird Al be funny on TV for kids without having to shoehorn in content that technically counted as educational to fulfill a legal requirement.
IIRC, this is why GI Joe had the little “and knowing is half the battle” skits at the end of every episode, so they could cynically claim to be educational. I think Masters of the Universe had the same thing.
which apparently subtly regulated a bunch of things about TV shows I remember nostalgically from my 90s childhood, and which seems pointless to me in retrospect.
Prologue + Part 1 of this article is required reading on this subject, from a person who grew up in the 80s.
Western countries could enact similar legislation
I don’t want to imagine the kind of children’s educational content that would be legislated, especially in certain areas of the country.
Oh, I’m not implying it’s a good idea. I bet the educational content they push in China is… kind of on par with the kind of children’s educational content that would be legislated in certain areas of the country :-P.
The point I was trying to make is that the delivery of “brainless” entertainment content is not inherent to a platform’s technology or algorithms. Those don’t exist in a vacuum. TikTok, like many other platforms, is doing a bunch of things because #1 they’re allowed to do it without any restrictions and #2 because management thinks it’s a good idea. It’s completely unproductive to create the conditions for #1, and then fuss about how it’s used. As long as #1 holds, anyone who also ticks #2 will handle it the same way, so simply banning the current market leader will just change the market leader, not the things it does.
Similarly, gathering data to facilitate misinformation campaigns is not inherent to the technology TikTok uses, or the algorithms they use. It’s something that TikTok, and other social networks, do because they’re allowed to and because it’s good for their bottom lines. Banning a company, but not the practice, just makes room for another company to engage in it.
It’s kind of like school admins trying to ban chat programs because they interfered with students’ activities back in the day. When ICQ was banned and filtered, we just moved to Yahoo Messenger.
***p has always given me nightmares and I always had to triple-check that section of code to ensure it did what it was supposed to do. ;-)
I empathize with the OP, but once the blog is Digital Ocean’s property, they can do whatever they want with it, including shutting it down or hijacking the clicks.
The thing you said is so obvious (that you can do what you want with something that is yours) that I wonder why you said anything at all. What were you going for here?
Exactly. The assumption that DO or anyone else is buying up hosted blogs as an act of unadulterated altruism is flawed. There is nothing unethical about any of this. Incompetent? Perhaps… but certainly nothing to warrant the “DO, tyranny be thy name” rant captured by the OP.
One further point I might add - the internet is not immutable. Things are in constant flux. I feel like this doesn’t need to be pointed out, but here we are.
Thank you, I spent more time reading about this. It is an interesting idea, though, having a CLI. I guess I will explore Himalaya more in the future. Good luck.
As a mutt user, I cannot say mutt is a delight to configure. Himalaya appears to have reasonable defaults and a straightforward configuration file.
Disclaimer: not a himalaya user
However, once it is configured, you won’t have to look at it again. Plus you get more and more comfortable with the interface (this is true with any client though, not just mutt).
The “Linux is Linux” mention just cracked me up. And I have to agree with it – I could teach macOS to my parents, but not Linux. I just gave up teaching anyone the Linux desktop; it’s too “fragile” for a non-technical person’s daily use. Yes, there are equivalents, but none that are up to the mark. Heck, Google’s GSuite is way better than any word processor I’ve used on Linux. Enough digressing, though.
I liked the article, overall many points to ponder over. To the author: thanks for taking time to pen your thoughts.
My parents are using Linux, and neither of them is a technical person. They are much happier with LibreOffice than they were with MS Office, and MATE is an opt-out of MS (and Apple, for that matter) constantly changing the UI.
Quote from the article:
- Someone’s gonna reply with how their grandmother uses Linux and it’s actually really easy! Last time I used Linux I had to learn how to cut power to my GPU because otherwise it was always on and dropping my laptop’s battery life to three hours
More power to them! :-)
I never tried MATE, though it seems promising. I just hope it does not become yet-another-attempt-at-UI, like so many other Linux-based UIs.
It’s a fork of GNOME2 made as a reaction to the GNOME3 attempt-at-UI. It had a pretty awkward initial development stage but now it works as well as GNOME2 did. I’m not saying that it’s a perfect DE for everyone, but it never does pointless redesigns and for me it does everything I want a DE to do, including things that most “modern” DEs no longer can do, like a fixed virtual desktop layout.
Yes, the redesigns and the associated learning curve were among the issues I had (hence I said “fragile”). It was as if the Linux DE was perpetually evolving. I will try out MATE. Thanks.
The only way things ever change is if someone makes the first move, instead of blaming each other or waiting on the world to change. I can only control my own behavior, so I have to make the first move. I’m a developer, so I guess that means I have to voluntarily saddle myself with underpowered hardware, so I can feel the users’ pain, perhaps magnified. I’ve already put off upgrading my main PC; I’m currently using a Skylake laptop from 2016. But that’s still a quad-core i7-6700HQ with 16 GB of RAM, so maybe I need to go lower.
I worked at Sun Microsystems in the early 1990s and I remember hearing that this was a policy on some teams that were building UI code. They were given low-end workstations so they would experience their UIs the same way end users would. Can’t say firsthand if it was actually true (I was working on low-level stuff) but it seemed like an interesting concept to me at the time.
That is a smart idea and one heck of a smart manager! Sun was innovative in so many ways, this may very well be true.
Facebook is reported to have had a similar concept, but with internet speed. On Tuesdays, they gave employees the possibility of experiencing their website as if they had a 2G connection. (source: https://www.businessinsider.com/facebook-2g-tuesdays-to-slow-employee-internet-speeds-down-2015-10)
I was so annoyed by the premise (most people do not even have a computer where sizeof size_t == sizeof int; it’s 2023, come on!) that I just hid this article, but then it’s a very valid rant about crazy arithmetic in C. Rust is already worth using just for fixing this.
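To illustrate what “fixing this” buys you (just a sketch, not taken from the article): Rust has no implicit integer conversions, and overflow behaviour has to be chosen explicitly instead of silently wrapping or invoking undefined behaviour.

```rust
use std::convert::TryFrom;

fn main() {
    // Illustrative values only.
    let size: usize = 10;
    let offset: i64 = -3;

    // `size + offset` would not compile: there are no implicit integer
    // conversions between usize and i64. The conversion has to be written
    // out, and can be made fallible:
    let adjusted = usize::try_from(size as i64 + offset).expect("negative size");
    assert_eq!(adjusted, 7);

    // Overflow is not undefined behaviour: debug builds panic on it, and
    // wrapping / checked / saturating variants are explicit method calls.
    assert_eq!(u8::MAX.wrapping_add(1), 0);
    assert_eq!(u8::MAX.checked_add(1), None);
}
```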
I generally get annoyed when someone randomly throws out Rust as a substitute for everything that is seemingly wrong with C. I believe each language has its place in the overall scheme of things, be it C or Rust or C++ or Haskell. Trying to solve every problem with Rust is not an approach I’d be willing to espouse.
I posted this thread on a separate discussion, but I think it is worth pondering over: https://marc.info/?l=openbsd-misc&m=151233345723889&w=2 (the entire discussion is worth reading, IMO).
If one can set aside that it’s just Theo doing his garden-variety shouting at things he didn’t make, the actual things he’s shouting about are also not especially true, or are now out of date. That’s when I realised this mail is from six years ago.
Lots of languages have a place in the future, but it is not true that each language does. Some technologies become outdated for real reasons, and people eventually stop using them.
I don’t know why you would start a new project in 2023 in a language with as much confusion about types and as few safety features as C, unless you had an especially baroque requirement. Even then, I don’t know that it wouldn’t be worth spending some of your budget trying to bend Rust or something like Rust to fit anyway. You’re going to pay a lot of hidden costs in using C; you just won’t get to choose when those costs come due: they’ll happen in the form of critical memory-safety and thread-safety bugs down the line, often after deployment and to the detriment of economic or other value.
I thought the ILP64 model was a bit more common, but the only user of it Wikipedia knows about is the HAL Computer Systems port of Solaris to SPARC64.
In my opinion, it would have been a mistake had there been better choices available. C was one of the very few languages that made life easier for programmers (recall most were using assembly back then), and offered the desired performance benefits of being close to the machine. And it was a small language, allowing quick learning and unparalleled flexibility. Of course, that flexibility came with a tradeoff – it was much, much easier to shoot yourself in the foot (and we continue to see these footguns even today!)
Even today I don’t see any formidable competitor to C when it comes to implementing OS kernels, networking stacks, filesystems, and similar serious applications. Rust may be the C replacement, but we have to wait and see where it goes.
On that note, I recall an argument about using newer memory-safe language as a replacement for C on the OpenBSD mailing list, and this email from Theo de Raadt is worth pondering over: https://marc.info/?l=openbsd-misc&m=151233345723889&w=2. If someone cares, the entire thread is worth reading.
The full quote is:
For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.
Maybe this has changed since the email in 2017.
The part I find objectionable is the idea that not supporting i386 is a terrible problem for a new language.
You can program Rust just fine on OpenBSD/i386. What you can’t do is include it in the base system.
In OpenBSD there is a strict requirement that base builds base.
As long as at least one supported architecture cannot build Rust from base, it’s not going to get into the OpenBSD base system.
My daughter and her friends created this book, including the illustrations in it. I thought it was a pretty cool way to learn molecular geometry, so I’m sharing it here. :-)